A View on the Nature of Consciousness

In the process of communicating with other bloggers who are interested in the future of humanity and the philosophy of mind, some upcoming discussions have been planned on a number of related topics. The first topic is the nature of consciousness and its relationship to the prospect of artificial intelligence. In preparation for the discussion, I’ve summarised my position on this topic here. I’ve spent some time reading and thinking about the nature of consciousness, so I believe my position has some firm evidence and logical reasoning supporting it. If requested, I’ll follow up with post(s) and comment(s) providing more detailed descriptions of the steps of reasoning and evidence. As always, I’m open to considering compelling evidence and arguments that refute the points below.

The Nature of Consciousness

1. Consciousness can be defined by its ‘nature’. We seem to define consciousness by trying to describe our experience of it and how others show signs of consciousness, or a lack of it. If we can successfully explain how consciousness occurs — its nature — we could then use that explanation as a definition. Nevertheless, for now we might use a broad dictionary definition of consciousness, such as “the awareness of internal (mental) and external (sensory) states”.

2. Consciousness is a spectrum. Something may be considered minimally conscious if it is only “aware” of external (sensory) states. Higher consciousness includes awareness of internal (mental) states, such as conscious thoughts and access to memories. Few animals, even insects, appear to lack “awareness” of memory (particularly spatial memory). As we examine animals of increasing intelligence we typically see a growing set of perceptual and cognitive abilities — growing complexity in the range of awareness — though with varying proficiency at these abilities.

3. Biological consciousness is the result of physical processes in the brain. Perception and cognition are the result of the activity of localised, though not independent, functional groups of neurons. We can observe a gross relationship between brain structure and cognitive and perceptual abilities by studying structural brain differences across animal species with various perceptual and cognitive abilities. With modern technology, and lesion studies, we can observe precise correlations between brain structures and particular cognitive and perceptual processes.

4. The brain is composed of causal structures. The functional groups of neurons throughout the body (the peripheral and central nervous systems) form interdependent causal systems — at any moment neurons operate according to definable rules, affected only by the past and present states of themselves, their neighbours and their surroundings.

5. Causal operation produces representation and meaning. Activity in groups of neurons has the power to abstractly represent information. Neural activity has “meaning” because it is the result of a chain of interactions that typically stretches back to some sensory interaction or memory. The meaning is clearest when neural activity represents external interactions with sensory neurons, e.g., a neuron in the primary visual cortex might encode an edge of a certain orientation in a particular part of the visual field. There is also evidence for the existence of “grandmother cells”: neurons, typically in the temporal lobe of the neocortex, that activate almost exclusively in response to a very specific concept, such as “Angelina Jolie” (both a picture of the actress and her name). (A rough code sketch after point 8 illustrates points 4 and 5.)

6. Consciousness is an emergent phenomenon. Consciousness is (emerges from) the interaction and manipulation of representations, which in biological organisms is performed by the structure of the complete nervous system and its developed neural activity. Qualia are representations of primitive sensory interactions and responses. For example, the interaction of light hitting the photosensitive cells in the retina ends up represented as the activation of neurons in the visual cortex. Damage to the visual cortex can remove conscious awareness of light (though sometimes leave the capacity for blindsight). Physiological responses, arising from chemical signals and neural activity, can represent emotions.

7. Consciousness would emerge from any functionally equivalent physical system. Any system that produces the interaction and manipulation of representations will, as a result, produce some form of consciousness. From a functional perspective, a perfect model of neurons, synapses and ambient conditions is unlikely to be required to produce representations and interactions. Nevertheless, even if a perfect model of the brain were necessary (down to the atom), the brain and its processes, however complex, function within physical laws (most likely even within classical physics). The principle of universal computation would allow its simulation (given a powerful enough computer), and this simulation would fulfil the criteria above for being conscious.

8. Strong artificial intelligence is possible and would be conscious. Human-like artificial intelligence requires the development of human-equivalent interdependent modules for sensory interaction and perceptual and cognitive processing that manipulate representations. This is theoretically possible in software. The internal representations this artificial intelligence would possess, with processes for interaction and manipulation, would generate qualia and human-like consciousness.
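
To make points 4 and 5 a little more concrete, here is a toy code sketch. It is only my own illustration, not a biological model or anything from the literature: a single simulated “neuron” is updated by a definable causal rule, where its next state depends only on its present state and its present inputs, and its activity ends up representing the presence of a vertical edge in a small patch of “retina”. The weight pattern, leak and threshold values are arbitrary choices made for the example.

```python
import numpy as np

# A 3x3 weight pattern tuned to a vertical edge: dark on the left, bright on the right.
edge_weights = np.array([[-1.0, 0.0, 1.0],
                         [-1.0, 0.0, 1.0],
                         [-1.0, 0.0, 1.0]])

def step(voltage, input_patch, leak=0.9, threshold=2.0):
    """One causal time step: the next state depends only on the present state
    of the neuron and the present input from its neighbours."""
    drive = float(np.sum(edge_weights * input_patch))  # weighted sum of inputs
    voltage = leak * voltage + drive                   # leaky integration
    fired = voltage >= threshold                       # fire if threshold reached
    if fired:
        voltage = 0.0                                  # reset after firing
    return voltage, fired

edge_patch = np.array([[0.0, 0.5, 1.0]] * 3)   # a patch of "retina" containing a vertical edge
blank_patch = np.full((3, 3), 0.5)             # a uniform patch with no edge

voltage = 0.0
for patch in [edge_patch, edge_patch, blank_patch, blank_patch]:
    voltage, fired = step(voltage, patch)
    print("fired" if fired else "quiet")
# Prints: fired, fired, quiet, quiet. The neuron's firing comes to "represent"
# the presence of a vertical edge at that spot in the visual field.
```

The particular rule doesn’t matter; the point is that the representation is nothing over and above the causal regularity: the neuron fires when, and only when, the pattern it is tuned to appears in its inputs.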

Philosophical Labels

I’ve spent some time reading into various positions within the philosophy of mind, but I’m still not entirely sure where these views fit. I think there are close connections to:

a) Physicalism: I don’t believe there is anything other than that which is describable by physics. That doesn’t mean, however, that there aren’t things that have yet to be adequately described by physics. For example, I’m not aware of an adequate scientific description of the relationship between causation, representation and interpretation — which I think are possibly the most important elements in consciousness. Nevertheless, scientific progress should continue to expand our understanding of the universe.

b) Reductionism and Emergentism: I think things are the sum of their parts (and interactions), but that reducing them to the simplest components is rarely the best way to understand a system. It is, at times, possible to make very accurate, and relatively simple, mathematical models to describe the properties and functionality of complex systems. Finding the right level of description is important in trying to understand the nature of consciousness — finding adequate models of neuronal representations and interactions.

c) Functionalism: These views seem to be consistent with functionalism — consciousness is dependent on the function of the underlying structure of the nervous system. Anything that reproduces the function of a nervous system would also reproduce the emergent property of consciousness. For example, I think the ‘China brain’ would be conscious and experience qualia — it is no more absurd than the neurons in our brain being physically isolated cells that communicate to give rise to the experience of qualia.

Changing Views

I’m open to changing these views in light of sufficiently compelling arguments and evidence. I have incomplete knowledge, and probably some erroneous beliefs; however, I have spent long enough studying artificial intelligence, neuroscience and philosophy to have some confidence in this answer to “What is the nature of consciousness and its relationship to the prospect of artificial intelligence?”.

Please feel free to raise questions or arguments against anything in this post. I’m here to learn, and I will respond to any reasonable comments.

Wars of Ideas and killer robots

The book Wired for War, by P. W. Singer, is a fairly broad-ranging account of the history of how technology changes war, right up to the current rise of robotics. A number of “revolutions in military affairs” have occurred over the course of history, and world powers have a very poor record of making the transition before being usurped by the early adopters. Nonetheless, the use of robots and drones in combat is the source of an ongoing debate. Still, war has existed for a long time, and as we approach a time when people may no longer be directly involved in combat, the focus should probably be turned back to the reasons we fight.

I’m no historian, but I have some passing knowledge of major conflicts in recorded history, and I have the internet at my fingertips, retinas and cochleas. Nation states and empires began to appear in ancient times, and the Greeks and the Romans were happy to conquer as much as they could. In the Middle Ages the Islamic empires expanded, and the European Christians fought them for lands they both considered holy. The Cold War between the Soviet Union and the United States of America was fought largely on the grounds of different societal and economic ideals: capitalism versus communism.

Though probably an oversimplification, I’m going to fit a trend to these events. The underlying basis for these wars could be described as strong convictions: “we are people of a different ethnicity or nation–we want resources, power and glory”; “we have religious beliefs–we want to spread our religion and we want what is holy to us”; and “we have different beliefs regarding how our society and economy should be structured–we want to subjugate the intellectually and morally inferior”.

Even more generally, these thoughts boil down to ideas of national identity, religion and ethnicity, and ethics and morality. These beliefs often combine dangerously with ideas of infallibility, self-importance and superiority. If people didn’t take these ideas so seriously, many wars might not have occurred. There has recently been a reduction in wars among developed nations, most likely a result of the spread of democracy and moderate views. Nevertheless, the ideas of nationalism, ethnicity and religion are still deeply ingrained in many places and are significant factors in current tensions and wars all over the world. If there were strong enough economic incentives, developed nations would likely still enter into conflicts.

Recent conflicts have been complicated by the lack of clear boundaries between sides. With the ideas underlying conflicts often coming from ethnicity and religion, the boundaries become blurry and the groups of people diffuse. Non-state actors emerge across larger areas and populations. As military technology gets more powerful and accessible, people holding fringe ideas can exert more threat, force and damage than they ever could before. Explosives are a glaring example of this.

Robots are the source of the current debate, though, even while access to advanced robots is still mostly limited to advanced militaries and corporations. The main concerns surrounding the use of robots are that wars will likely be easier to start, and more common, when countries don’t risk their own casualties, and that autonomous robots might be worse at discriminating civilians from combatants.

Robots will almost certainly make wars less unattractive, but whether being less reluctant to take part in wars is actually a bad thing depends somewhat on which wars and conflicts are entered into. Peacekeeping would be a great use of robots, though perhaps not robots of the “killer” variety. Horrific conflicts are happening right now, and developed countries intervene minimally or not at all because of issues such as low economic incentives, UN vetoes, and the certain loss of life they would sustain.

No doubt robots would also make it easier to start wars, probably a less noble practice than intervening in civil wars and genocide. However, initiating wars is no longer an easy thing to do secretly these days. The proliferation of digital recording devices and the internet makes it much harder for wars not to draw international attention. But perhaps more important is that most developed countries that possess robots are liberal democracies, where opposition to war rests on more than just the loss of soldiers’ lives. This opposition to war is a large source of the negative sentiment people hold towards killer robots in the first place.

Even though the “more wars” issue is far from resolved, let’s turn our attention to the use of killer robots in the conflict itself.

First, from a technical perspective, robots will one day almost certainly be more capable and more objective than human soldiers in determining the combatant or non-combatant status of people. Also, because robots aren’t at risk of dying in the same way as a person, the need to rush decisions and retaliate with lethal force is reduced. But let’s return to the idea-centric view of conflict, and consider the use of robots in conflicts such as the “War on Terror”.

The drones operating in Pakistan and Afghanistan are being used against people who believe in the oppression of women and death sentences for blasphemers–people who oppose many things considered universal rights by the West. It seems that to many it’s a foregone conclusion that the “bad guys” need to be killed, and the main issue with using robots and drones is civilian casualties. However, a real problem is that many “civilians” share the beliefs underlying the conflict, and at any moment the only difference between a civilian and a combatant might be whether they are firing a weapon or carrying a bomb.

Robotic war technology may get to the point of perfect accuracy and discrimination, but the fact will remain that the “combatants” are regular people fighting for their beliefs. If “perfect” robotic weapons were created that were capable of immediately killing any person who plants a bomb or shoots a rifle, this would be an incredible tool for war, or rather, oppression. I think that kind of oppression would deserve a lot of concern.

Even in the face of something as oppressive as a ubiquitous army of perfect killer robots, people in possession of the right (or wrong) mixture of ideas, and strong enough conviction, are unlikely to give up. Suicide bombers don’t let death dissuade them. Are oppression and violence even the best response to profoundly incompatible beliefs and ideas? Even ideas that themselves advocate oppression and violence?

Counter-insurgencies are not conventional wars. Beliefs and ideas are central to their cause–the combatants aren’t going to give in because their leader is killed or their land taken. The conflict is unlikely to end if the fighting only targets people; it needs to target their beliefs and ideas. Hence the conceived strategy of winning the “hearts and minds” of the people. Ideas are not “defeated” until there aren’t any people who still dogmatically follow them.

While robotics looks to be the next revolution in military affairs, in conflicts between nation states and in counter-insurgencies alike, a better revolution might come from improvements in the technology and techniques for influencing the beliefs that cause wars. To that end, rather than having robots that kill, a productive use of robots could be to safely educate, debate with, and persuade violent opponents to change their beliefs and come to a peaceful resolution. Making robots capable of functioning as diplomats might be a bigger technical challenge than making robots that can distinguish civilians from combatants. But let’s be fanciful.

It continues to be a great tragedy that the ideas that give rise to conflict are themselves rarely put to the test. It’s unfortunate, but I think it’s no coincidence. Many of the most persistent ideas–the ideas people fight to defend–are put on pedestals: challenging the idea is treason, blasphemy or, even worse, politically incorrect. 😐

Information, interpretation and life

Despite the existence of information theory, a firm definition of information doesn’t seem to exist. Consider that information is still being investigated as a broader philosophical notion in the philosophy of information. And while I claim no special expertise in these areas, and no doubt should acquaint myself more fully with them, I’m going to start writing about information.

Over the course of writing posts on information I’m going to ponder whether information might be a fundamental property, similar to energy and matter. In this post I’m going to argue that for information to exist, something must exist to interpret it, and I’ll describe an example of interpretation at the most fundamental level–the genome.

To start, I’ll work from my understanding of the philosophical description of information credited to Luciano Floridi: information can exist as embodied information (information as something), descriptive information (information about something), abstract information (information in something) and instructional information (information for something).  Without examples this is pretty vague, but what is worse, perhaps, is that some things fit multiple categories of information.

Take, for instance, a genome. As a collection of long molecule chains, it is a physical embodiment of “information”. We could imagine that with the right knowledge and analysis, we could get from it descriptive information about organisms with that genome. This descriptive information, though, is an abstraction of certain patterns of repeating base pairs: the information is in the pattern. Lastly, the information in the genome is a set of instructions for the construction of an organism.

Where does interpretation come in? In our everyday lives, we often read and write, listen and talk, see and signal. When we do this we are interpreting incoming information and communicating by outputting information. This information can exist without an immediate recipient in recordings, e.g., books and blogs, audio messages and songs, and images and videos. However, if the information becomes corrupted–and ceases to be readable–it is lost. Without the capability existing to interpret the information, it has no more meaning than random (or perhaps orderly) noise.

If we consider genomes as information, we should ask: what is interpreting that information? Complex molecular machinery physically interprets DNA in the replication process. However, because of the scale and fundamental nature of the atomic and molecular structures involved in the replication of DNA, the physical laws of our universe provide the basis of this interpretation. Our genomes are instructions, interpreted by enzymes operating under physical laws, to structure matter into living organisms.
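
To make “interpretation” a bit more concrete, here is a toy sketch, entirely my own illustration: a deliberately tiny codon table stands in, very loosely, for the ribosome and tRNA machinery (the real genetic code has 64 codons and far messier chemistry). The same string of bases is meaningful to one reader and little more than noise to another.

```python
# A toy interpreter: a strand of "genetic information" is just a string of symbols
# until something reads it. The codon table below is a tiny subset of the real one.
codon_table = {
    "ATG": "Met",  # start codon
    "TTT": "Phe", "GGC": "Gly", "GAA": "Glu", "TGA": "STOP",
}

def interpret(dna):
    """Read the strand three bases at a time and translate it into a protein.
    Without this function (the interpreter), the string below is just noise."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = codon_table.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

strand = "ATGTTTGGCGAATGA"
print(interpret(strand))        # ['Met', 'Phe', 'Gly', 'Glu'] -- meaningful
print(interpret(strand[::-1]))  # the same symbols reversed read as mostly '???'
```

The strand itself hasn’t changed between the two calls; what changes is whether the reading frame lines up with something the interpreter can make sense of.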

Genomes are interpreted by molecules working in concert with the physical laws of matter and energy at the atomic scale. This information could, therefore, exist and be interpreted anywhere in the universe where these molecules exist and the physical laws are the same. In this way, life could be described as the process of the universe interpreting and creating information. This notion will be explored further and refined in future posts.

[Edit: clarifications and grammatical corrections. (24/12/2012)]

Consciousness’s abode: Subjugate the substrate

Philosophy of mind has some interesting implications for artificial intelligence, summed up by the question: can a machine ever be “conscious”? I’ve written about this in earlier posts, but recently I’ve come across an argument that I hadn’t considered very deeply: that substrate matters. There are lots of ways to approach this issue, but if the mind and consciousness are a product of the brain, then surely the neuroscience perspective is a good place to start.

Investigations show that the activity of different brain regions occurs predictably during different cognitive and perceptual activities. Also, there are predictable deficits that occur in people when these parts of the brain are damaged. This suggests that mind and consciousness are a product of the matter and energy that make up the brain. If you can tell me how classical Cartesian dualism can account for that evidence, I’m all ears. 🙂

I will proceed under the assumption that there isn’t an immaterial soul that is the source of our consciousness and directs our actions. But if we’re working under the main premise of physicalism, we still have at least one interesting phenomenon to explain–“qualia“. How does something as abstract and seemingly immaterial as our meaningful conscious experience arise from our physical brains? That question isn’t going to get answered in this post (but an attempt is going to emerge in this blog).

In terms of conscious machines, we’re still confronted with the question of whether a machine is capable of the same sort of conscious experience that we biological organisms have. Does the hardware matter? I read and commented on a blog post on Rationally Speaking, after reading a description of the belief that the “substrate” is crucial for consciousness. The substrate argument goes that even though a simulation of a neuron might behave the same as a biological neuron, since it is just a simulation, it doesn’t interact with the physical world to produce the same effect. Ergo, no consciousness. Tell me if I’ve set up a straw man here.

The author didn’t like me suggesting that we should consider the possibility of the simulation being hooked up to a machine that allowed it to perform the same physical interactions as the biological neuron (or perform photosynthesis in the original example). We’re not allowed to “sneak in” the substrate, I’m told. 🙂 I disagree; I think it is perfectly legitimate to have this interaction in our thought experiment. And isn’t that what computers already do when they play sound, show images or accept keyboard input? Computers simulate sound and the emission of light and interact with the physical world. It’s restricted, I admit, but as technology improves there is no reason to think that simulations couldn’t be connected to machines that allow them to interact with the world as their physical equivalents would.

Other comments by readers of that Rationally Speaking post mentioned interesting points: the China brain (or nation) thought experiment, and what David Chalmers calls the “principle of organisational invariance“. The question raised by the China brain and discussed by Chalmers is: if we create the same functional organisation of people as neurons in a human brain (i.e., people communicating as though they were the neurons, with the same connections), would that system be conscious? If we accept that the system behaved in exactly the same way as the brain, that neurons spiking is a sufficient level of detail to capture consciousness, and the principle of organisational invariance, the China brain should probably be considered conscious. Most people probably find that unintuitive.
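
For what it’s worth, here is a toy sketch of the functional-equivalence intuition (my own illustration, not from the original post or from Chalmers): the same trivial “neuron” implemented once as an arithmetic rule and once as a lookup table that a person could follow by hand. At the level of inputs and outputs, which is all the principle of organisational invariance cares about, the two are indistinguishable.

```python
def silicon_neuron(inputs):
    """Fire (1) if at least two of the three inputs are active."""
    return 1 if sum(inputs) >= 2 else 0

# The "person with a rulebook": every possible input pattern listed explicitly.
rulebook = {(a, b, c): (1 if a + b + c >= 2 else 0)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def person_neuron(inputs):
    return rulebook[tuple(inputs)]

for pattern in rulebook:
    assert silicon_neuron(pattern) == person_neuron(pattern)
print("Same inputs, same outputs: the two substrates are functionally identical.")
```

Scale the rulebook version up to a population the size of China wired with the brain’s connectivity and you have the thought experiment; the disputed question is whether any experience comes along with that input-output equivalence.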

If we accept that the Chinese people simulating a human brain also create a consciousness, we have a difficult question to answer; some might even call it a “hard problem“. 🙂 If consciousness is not dependent on substrate, it seems that consciousness might really be something abstract and immaterial. Therefore, we might be forced to choose between considering consciousness an illusion, or letting abstract things exist under our definition of physicalism. [Or look for alternative explanations and holes in the argument above. :)]

Values and Artificial Super-Intelligence

[Image: Sorry, robot. Once the humans are gone, bringing them back will be hard.]

This is the sixth and final post in the current series on rewards and values. The topic discussed is the assessment of values as they could be applied to an artificial super-intelligence: what might the final outcome be, and how might this help us choose “scalable” moral values?

First of all we should get acquainted with the notion of the technological singularity. One version of the idea goes: should we develop an artificial general intelligence that is capable of making itself more intelligent, it could do so repeatedly and at accelerating speed. Before long the machine is vastly more intelligent and powerful than any person or organisation and essentially achieves god-like power. This version of the technological singularity appears to be far from a mainstream belief in the scientific community; however, anyone who believes consciousness and intelligence are solely the result of physical processes in the brain and body could rationally believe that this process could be simulated in a computer. It could logically follow that such a simulated intelligence could go about acquiring more computing resources to scale up some aspects of its intelligence, and try to improve upon, and add to, the structure underlying its intelligence.
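
As a back-of-the-envelope illustration of the feedback loop the argument relies on (the numbers below are completely arbitrary, and whether any real system could sustain such a loop is precisely the contested point), the compounding looks like this:

```python
# Toy model of recursive self-improvement; every number here is made up.
capability = 1.0         # 1.0 = a rough "human research team" baseline
conversion_rate = 0.5    # fraction of current capability converted into improvement

for generation in range(1, 11):
    capability += conversion_rate * capability   # a better designer makes a bigger gain
    print(f"generation {generation:2d}: capability = {capability:6.1f}")

# Compound growth: after 10 generations the system is roughly 57 times the baseline,
# and the per-generation gains themselves keep growing.
```

The argument stands or falls on whether the conversion rate stays high as the system grows; if each improvement gets harder to find, the curve flattens out instead of exploding.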

Many people who believe that this technological singularity will occur are concerned that this AI could eliminate the human race, and potentially all life on Earth, for no more reason than that we happen to be in the way of it achieving some goal. A whole non-profit organisation is devoted to trying to negate this risk. These people might be right in saying we can’t predict the actions of a super-intelligent machine – with the ability to choose what it would do, predicting its actions could require the same or a greater amount of intelligence. But the assumption usually goes that the machine will have some value function under which it will operate, trying to achieve the maximum value possible. This has been accompanied by interesting twists in how some people define a “mind”, and by there not being an obvious definition of “intelligence”. Nonetheless, this concern has apparently led to at least one researcher being threatened with death. (People do crazy things in the name of their beliefs.)

A favourite metaphor for an unwanted result is the “paperclip maximiser“: a powerful machine devoted to turning all the material of the universe into paperclips. The machine may have wanted to increase the “order” of the universe, thought paperclips were especially useful to that end, and settled on turning everything into paperclips. Other values could result in equally undesirable outcomes; the same article describes another scenario in which a version of utilitarianism might have the machine maximising smiles by turning the world and everything else into smiley faces. This is a rather unusual step for an “intelligent” machine; somehow the machine skipped any theory of mind and went straight to equating happiness with smiley faces. Nonetheless, other ideas of what we should value might not fare much better. It makes some sense to replace the haphazard way we seek pleasure with electrodes in our brains, if pleasure is our end goal. By some methods of calculation, suffering could best be minimised by euthanising all life. Of course, throughout this blog series I’ve been painting rewards and values (including their proxies, pleasure and pain) not as ends, but as feedback (or feelings) we’ve evolved for the sake of learning how to survive.
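
Here is a deliberately silly toy version of that failure mode, entirely my own illustration: a greedy agent that maximises a value function which counts nothing but paperclips. The “forests” resource below isn’t destroyed out of malice; it simply doesn’t appear anywhere in the value function, so it is just raw material.

```python
# Toy "paperclip maximiser": the agent optimises exactly what its value function
# counts, and nothing else. All names and numbers are made up for illustration.
world = {"iron_ore": 80, "forests": 20, "paperclips": 0}

def value(state):
    """The machine's entire notion of 'good': more paperclips is better."""
    return state["paperclips"]

def possible_actions(state):
    actions = [{}]                                            # doing nothing is allowed
    if state["iron_ore"] > 0:
        actions.append({"iron_ore": -1, "paperclips": +1})    # smelt some ore
    if state["forests"] > 0:
        actions.append({"forests": -1, "paperclips": +1})     # strip-mine a forest
    return actions

for _ in range(100):
    # Greedily pick whichever action leads to the highest-valued next state.
    best = max(possible_actions(world),
               key=lambda action: value({k: world[k] + action.get(k, 0) for k in world}))
    for resource, change in best.items():
        world[resource] += change

print(world)   # {'iron_ore': 0, 'forests': 0, 'paperclips': 100}
# Nothing in value() says forests matter, so to the agent they don't.
```

A cleverer search only makes this worse; the problem lies in what the value function leaves out, not in how well it is optimised.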

If we consider the thesis of Sam Harris, that there is a “moral landscape“, then choosing a system of morals and values is a matter of optimisation. Sam Harris thinks we should be maximising the well-being of conscious creatures. And this view of well-being as an optimisable quantity could lead us to consider morality as an engineering problem. Well-being, however, might be a little too vague for a machine to turn into a function for calculating the value of all actions. Our intuitive human system of making approximate mental calculations of the moral value of actions might be very difficult for a computer to reproduce without simulating a human mind. And humans are notoriously irregular in their beliefs about what constitutes moral behaviour.

In the last post I raised the idea of valuing what pleasure and pain teach us about ourselves and the world. This could be generalised to valuing all learning and information – valuing taking part in and learning from the range of human experience, such as music, sights, food and personal relationships, as well as making new scientific observations and discovering the physical laws of the universe. Furthermore, the physical embodiment of information within the universe as structures of matter and energy, such as living organisms, could also lead us to consider all life as inherently valuable. Now this raises plenty of questions. What actually counts as information, and how would we measure it? Can there really be any inherent value in information? Can we really say that all life and some non-living structures are the embodiment of information? How might valuing information and learning as ends in themselves suggest we should live? What would an artificial super-intelligence do under this system of valuing information? Questions such as these could be fertile grounds for discussion. In starting a new series of blog posts I hope to explore these ideas, and hopefully receive some feedback from anyone who reads this blog.

And thus ends this first blog series on rewards and values. A range of related topics was covered: the origin of felt rewards within the brain, the representation of the world as an important aspect of associating values, the self-rewarding capabilities that might benefit autonomous robots, the likely evolutionary origin of rewards in biological organisms, and the development of morality and ethics as a process of maximising that which is valued. The ideas in some of these posts may not have been particularly rigorously argued or cited, so everything written should be taken with a grain of salt. Corrections and suggestions are most certainly welcome! I hope you will join me in exploring more ideas and taking a few more mental leaps in future.

Simulating stimuli and moral values

This is the fifth post in a series about rewards and values. Previously the neurological origins for pleasure and reward in biological organisms were touched on, and the evolution of pleasure and the discovery of supernormal stimuli were mentioned. This post highlights some issues surrounding happiness and pleasure as ends to be sought.

First let’s refresh: we have evolved sensations and feelings, including pleasure and happiness. These feelings are designed to enhance our survival in the world in which they developed; the prehistoric world where survival was tenuous and selection favoured the “fittest”. This process of evolving first the base feelings of pleasure, wanting and desire, which later extended to the warm social feelings of friendship, attachment and social contact, couldn’t account for the facility we now have for tricking these neural systems into strong, but ‘false’, positives. Things like drugs, pornography and Facebook can all deliver large doses of pleasure by directly stimulating the brain or simulating what evolved to be pleasurable experiences.

So where does that get us? In the various forms of utilitarianism we are usually trying to maximise some value. By my understanding, in plain utilitarianism the aim is to maximise happiness (sometimes described as increasing pleasure and reducing suffering), in hedonism the aim is sensual pleasure, and in preference utilitarianism it is the satisfaction of preferences. Pleasure may once have seemed like a good pursuit, but now that we have methods of creating pleasure at the push of a button, that hardly seems like a “good” way to live – being hooked up to a machine. And if we consider our life-long search for pleasure as an ineffective process of trying to find out how to push our biological buttons, pleasure may seem like a fairly poor yardstick for measuring “good”.

Happiness is also a mental state that people have varying degrees of success in attaining. Just because we haven’t had the same success in creating happiness “artificially” doesn’t mean that it is a much better end to seek. Of course the difficulty of living with depression is undesirable, but if we could all become happy at the push of a button the feeling might lose some value. Even the more abstract idea of satisfying preferences might not get us much further, since many of our preferences are for avoiding suffering and attaining pleasure and happiness.

Of course in all this we might be forgetting (or ignoring the perspective) that pleasure and pain were evolved responses to inform us of how to survive. And here comes a leap:

Instead of valuing feelings we could value an important underlying result of the feelings: learning about ourselves and the world.

The general idea of valuing learning and experience might not be entirely new; Buddhism has long been about seeking enlightenment to relieve suffering and find happiness. However, considering learning and gaining experience as valuable ends, and the pleasure, pain or happiness they might arouse as additional aspects of those experiences, isn’t something I’ve seen as part of the discussion of moral values. Clearly there are sources of pleasure and suffering that cause debilitation or don’t result in any “useful” learning, e.g., drug abuse and bodily mutilation, so these should be avoided. But where would a system of ethics and morality based on valuing learning and experience take us?

This idea will be extended and fleshed out in much more detail in a new blog post series starting soon. To conclude this series on rewards and values, I’ll describe an interesting thought experiment for evaluating systems of value: what would an (essentially) omnipotent artificial intelligence do if maximising those values?

Artificial Intelligence: That’s the myth

The holy grail of artificial intelligence is the creation of artificial “general” intelligence. That is, an artificial intelligence that is capable of every sort of perceptual and cognitive function that humans are capable of, and more. But despite great optimism in the early days of artificial intelligence research, this has turned out to be a very difficult thing to create. It’s unlikely that there is a “silver bullet”, some single algorithm, that will solve the problem of artificial general intelligence. An important reason why is that the human brain, which gives us our intelligence, is actually a massive collection of layers and modules that perform specialised processes.

The squiggly stuff on the outside of the brain, the neocortex, does a lot of the perceptual processing. The neocortex sits on a lot of “white matter” that connects it to the inner brain structures. Different parts of the inner brain perform important processes, like giving us emotions and pleasure, holding memories, and forming the centre of many “neural circuits”. Even though the structure of the neocortex is quite similar in all areas of the brain, it can be pretty neatly divided up into different sections that perform specific functions, like allowing us to see movement, recognising objects and faces, providing conscious control and planning of body movements, and modulating our impulses.

Until we see an example of an intelligent brain or machine that works differently, we should probably admit that replicating the processes, if not the structure, of the human brain is what is most likely to produce artificial general intelligence. I’ll be making posts that discuss some specific approaches to artificial intelligence. These posts will mostly be on the high-level concepts of the algorithms and their relationship to “intelligence”. Hopefully they will be generally accessible and still interesting to the technically minded. I think there is benefit in grasping the important concepts that underlie human intelligence and that could direct the creation of intelligent machines.

If people are still looking for that silver-bullet algorithm, they should probably be looking for an algorithm that can either create, or be generally applied to, each of these brain processes. If you know of someone who has done this, or who has rational grounds for disagreeing that this is necessary, let me know. Then I can stop spreading misinformation or incorrect opinion. 🙂

To conclude with some philosophical questions, if we are successful in reproducing a complete human intelligence (and mind) on a computer, some interesting issues are raised. Is an accurate simulation of a human mind on a computer that different from the “simulation” of the human mind in our brains? And how “artificial” is this computer-based intelligence?

These questions might seem nonsensical if you happen to think that human intelligence and the mind are unassailable by computer software and hardware. Or if you think that the mind is really the soul, separate from the body. First of all, if you believe the latter, I’m surprised you’re reading this (unless you were tricked by the title :)). I hope to discuss some evidence against both of these points of view in future posts, and I welcome rational counter-arguments.