Learning Algorithms for People: Reinforcement Learning

In a previous post I described some simple ways in which learning could be performed algorithmically, replicating the basic process of supervised learning in machines. Of course it’s no secret that repetition is important for learning many things, but the exact way in which we repeat a training set or trial depends on what it is we are trying to learn or teach. I wrote a series of posts on values and reinforcement learning in machines and animals, so here I will describe a process for applying reinforcement learning to developing learning strategies. Perhaps more importantly, though, I will discuss a significant notion in machine learning and its relationship to psychological results on conditioning — introducing the value function.

Let’s start with some pseudo-code for a human reinforcement learning algorithm that might be representative of certain styles of learning:

given learning topic, S, a set of assessments, T, and study plan, Q
    for each assessment, t in T
        study learning topic, S, using study plan, Q
        answer test questions in t
        record grade feedback, r
        from feedback, r, update study plan, Q 

This algorithm is vague on the details, but this approach of updating a study plan fits the common system of education: one where people are given material to learn and receive grades as feedback on their responses to assignments and tests.

Let’s come back to the basic definitions of computer-based reinforcement learning. The typical components in reinforcement learning are the state-space, S, which describes the environment; the action-space, A, which comprises the options for attempting to transition to different states; and the value function, Q, which is used to pick an action in any given state. The reinforcement feedback, reward and punishment, can be treated as coming from the environment, and is used to update the value function.
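
As a rough illustration of these components, below is a minimal tabular Q-learning sketch in Python. The states, actions, rewards and parameter values are arbitrary placeholders invented for the example, not anything prescribed by the posts above.

import random

# Minimal tabular Q-learning: a state-space, an action-space, a value function Q,
# and reward feedback used to update Q. All names and numbers are illustrative.
states = ["topic_unread", "topic_skimmed", "topic_understood"]
actions = ["reread_notes", "do_practice_questions", "take_break"]

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate
Q = {(s, a): 0.0 for s in states for a in actions}

def choose_action(state):
    # Pick an action: usually the highest-valued one, occasionally a random one.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Nudge Q(state, action) toward the reward plus the best value of the next state.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Example: practice questions worked well from this state, so reinforce that choice.
update("topic_skimmed", "do_practice_questions", reward=1.0, next_state="topic_understood")
next_action = choose_action("topic_understood")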

The algorithm above doesn’t easily fit into this structure. Nevertheless, we could consider the combination of the learning topic, S, and the grade-system as the environment. Each assessment, t, is a trial of the study plan, Q, with grades, r, providing an evaluation of the effectiveness of study. The study plan is closely related to the value function — it directs the choices of how to traverse the state-space (learning topic).

This isn’t a perfect analogy, but it leads us to the point of reinforcement feedback: to adjust what is perceived as valuable. We could try to use a reinforcement learning algorithm whenever we are searching for the best version of a routine or a skill and all we receive as feedback is a measure of success or failure.

Coming back to the example algorithm, though, treating grades as the only reinforcement feedback in education is a terrible over-simplification. For example, consider the case of a school in a low socio-economic area where getting a good grade will actually get you punished by your peers. Or consider the case of a child that is given great praise for being “smart”. In a related situation, consider the case of praising a young girl for “looking pretty”. How is the perception of value, particularly self-worth, affected by this praise?

Children, and people in general, feel that the acceptance and approval of their peers is a reward. Praise is a form of approval, and criticism is a form of punishment, and each is informative of what should be valued. If children are punished for looking smart, they will probably value the acceptance of their peers over learning. If children are praised for being smart, they may end up simply avoiding anything that makes them look unintelligent. If children are praised for looking pretty, they may end up valuing looking pretty over being interesting and “good” people.

A solution could be to try to be more discerning about what we praise and criticise. The article linked above makes a good point about praising children for “working hard” rather than “being smart”. Children who feel that their effort is valued are more likely to try hard, even in the face of failure. Children who feel that praise will only come when they are successful, will try to avoid failure. Trying to give children self-esteem by praising them for being smart or pretty is, in fact, making their self-esteem contingent on that very thing they are being praised for.

It may seem manipulative, but praise and criticism are ways we can reward and punish people. Most people will adjust their “value-function”, their perception of what is valuable, and, as a result, they will adjust their actions to try to attain further praise, or avoid further punishment. What we praise and criticise ourselves for is a reflection of what we value in ourselves. And our self-praise and self-criticism can also be used to influence our values and self-esteem, and hence our actions.


A View on the Nature of Consciousness

In the process of communicating with other bloggers that are interested in the future of humanity and philosophy of mind, some upcoming discussions have been planned on a number of related topics. The first topic is: the nature of consciousness and its relationship to the prospect of artificial intelligence. In preparation for the discussion, I’ve summarised my position on this topic here. I’ve spent some time reading and thinking about the nature of consciousness, so I believe that my position has some firm evidence and logical reasoning supporting it. If requested, I’ll follow up with post(s) and comment(s) providing more detailed descriptions of the steps of reasoning and evidence. As always, I’m open to considering compelling evidence and arguments that refute the points below.

The Nature of Consciousness

1. Consciousness can be defined by its ‘nature’. We seem to define consciousness by trying to describe our experience of it and how others show signs of consciousness or lack of consciousness. If we can successfully explain how consciousness occurs — its nature — we could then use that explanation as a definition. Nevertheless, for now we might use a broad dictionary definition of consciousness, such as “the awareness of internal (mental) and external (sensory) states”.

2. Consciousness is a spectrum. Something may be considered minimally conscious if it is only “aware” of external-sensory states. Higher consciousness includes awareness of internal-mental states, such as conscious thoughts and access to memories. Few animals, even insects, appear to be without “awareness” of memory (particularly spatial memory). As we examine animals of increasing intelligence we typically see a growing set of perceptual and cognitive abilities — growing complexity in the range of awareness — though with varying proficiency at these abilities.

3. Biological consciousness is the result of physical processes in the brain. Perception and cognition are the result of the activity of localised, though not independent, functional groups of neurons. We can observe a gross relationship between brain structure and cognitive and perceptual abilities by studying structural brain differences across animal species with various perceptual and cognitive abilities. With modern technology and lesion studies, we can observe precise correlations between brain structures and those cognitive and perceptual processes.

4. The brain is composed of causal structures. The functional groups of neurons in the entire body (peripheral and central nervous systems) form interdependent causal systems — at any moment neurons operate according to definable rules, affected only by the past and present states of themselves, their neighbours and their surroundings.

5. Causal operation produces representation and meaning. Activity in groups of neurons has the power to abstractly represent information. Neural activity has “meaning” because it is the result of a chain of interactions that typically stretches back to some sensory interaction or memory. The meaning is most clear when neural activity represents external interactions with sensory neurons, e.g., a neuron in the primary visual cortex might encode for an edge of a certain orientation in a particular part of the visual field. There is also evidence for the existence of “grandmother cells”: neurons, typically in the temporal lobe of the neocortex, that activate almost exclusively in response to a very specific concept, such as “Angelina Jolie” (both a picture of the actress and her name).

6. Consciousness is an emergent phenomenon.  Consciousness is (emerges from) the interaction and manipulation of representations, which in biological organisms is performed by the structure of the complete nervous system and developed neural activity. Qualia are representations of primitive sensory interactions and responses. For example, the interaction of light hitting the photosensitive cells in the retina ends up represented as the activation of neurons in the visual cortex. It is potentially possible to have damage to the visual cortex and lose conscious awareness of light (though sometimes still be capable of blindsight). Physiological responses can result from chemicals and neural activity and represent emotions.

7. Consciousness would emerge from any functionally equivalent physical system. Any system that produces the interaction and manipulation of representations will, as a result, produce some form of consciousness. From a functional perspective, a perfect model of neurons, synapses and ambient conditions is not likely to be required to produce representations and interactions. Nevertheless, even if a perfect model of the brain was necessary (down to the atom), the brain and its processes, however complex, function within the physical laws (most likely even classical physics). The principle of universal computation would allow its simulation (given a powerful enough computer) and this simulation would fulfil the criteria above for being conscious.

8. Strong artificial intelligence is possible and would be conscious. Human-like artificial intelligence requires the development of human-equivalent interdependent modules for sensory interaction and perceptual and cognitive processing that manipulate representations. This is theoretically possible in software. The internal representations this artificial intelligence would possess, with processes for interaction and manipulation, would generate qualia and human-like consciousness.

Philosophical Labels

I’ve spent some time reading into various positions within the philosophy of mind, but I’m still not entirely sure where these views fit. I think there are close connections to:

a) Physicalism: I don’t believe there is anything other than that which is describable by physics. That doesn’t mean, however, that there aren’t things that have yet to be adequately described by physics. For example, I’m not aware of an adequate scientific description of the relationship between causation, representation and interpretation — which I think are possibly the most important elements in consciousness. Nevertheless, scientific progress should continue to expand our understanding of the universe.

b) Reductionism and Emergentism: I think things are the sum of their parts (and interactions), but that reducing them to the simplest components is rarely the best way to understand a system. It is, at times, possible to make very accurate, and relatively simple, mathematical models to describe the properties and functionality of complex systems. Finding the right level of description is important in trying to understand the nature of consciousness — finding adequate models of neuronal representations and interactions.

c) Functionalism: These views seem to be consistent with functionalism — consciousness is dependent on the function of the underlying structure of the nervous system. Anything that reproduces the function of a nervous system would also reproduce the emergent property of consciousness. For example, I think the ‘China brain’ would be conscious and experience qualia — it is no more absurd than the neurons in our brain being physically isolated cells that communicate to give rise to the experience of qualia.

Changing Views

I’m open to changing these views in light of sufficiently compelling arguments and evidence. I have incomplete knowledge, and probably some erroneous beliefs; however, I have spent long enough studying artificial intelligence, neuroscience and philosophy to have some confidence in this answer to “What is the nature of consciousness and its relationship to the prospect of artificial intelligence?”.

Please feel free to raise questions or arguments against anything in this post. I’m here to learn, and I will respond to any reasonable comments.

Learning algorithms for people: Supervised learning

Access to education is widely considered a human right, and, as such, many people spend years at school learning. Many of these people also spend a lot of time practising sport, musical instruments and other hobbies and skills. But how exactly do people go about trying to learn? In machine learning, algorithms are clearly defined procedures for learning. Strangely, though the human brain is a machine of sorts, we don’t really consider experimenting with “algorithms” for our own learning. Perhaps we should.

Machine learning is typically divided into three paradigms: supervised learning, reinforcement learning, and unsupervised learning. These roughly translate into “learning with detailed feedback”, “learning with rewards and punishments” and “learning without any feedback” respectively. These types of learning have some close relationships to the learning that people and animals already do.

Many people already do supervised learning, although probably much more haphazardly than a machine algorithm might dictate. Supervised learning is good when the answers are available. So when practising for a quiz, or practising a motor skill, we make attempts, then try to adjust based on the error we observe. A basic algorithm for people to perform supervised learning to memorise discrete facts could be written as:

given quiz questions, Q, correct answers, A, and stopping criteria, S
    do
        for each quiz question q in Q
            record predicted answer p
        for each predicted answer p
            compare p with correct answer, a
            record error, e
    while stopping criteria, S, are not met
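
As a minimal sketch of how this pseudo-code might look as a runnable program, here is one possible Python version. The example facts and the stopping criteria (a target accuracy and a maximum number of passes) are assumptions made purely for illustration.

# A minimal sketch of the quiz loop above: attempt every question, check the
# answers, note the errors, and repeat until the stopping criteria are met.
quiz = {
    "Capital of France?": "paris",
    "7 x 8?": "56",
    "Chemical symbol for iron?": "fe",
}
target_accuracy, max_passes = 1.0, 10   # stopping criteria, S

for attempt in range(1, max_passes + 1):
    errors = []                                             # record errors, e
    for question, answer in quiz.items():
        predicted = input(question + " ").strip().lower()   # record predicted answer, p
        if predicted != answer:                             # compare p with correct answer, a
            errors.append(question)
    accuracy = 1 - len(errors) / len(quiz)
    print(f"Pass {attempt}: accuracy {accuracy:.0%}, to review: {errors}")
    if accuracy >= target_accuracy:                         # stopping criteria, S, met
        break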

Anyone could use this procedure for rote memorisation of facts, using a certain percentage of correct answers and a set time as the stopping criteria. However, this algorithm supposes the existence of questions associated with the facts to memorise. Memorisation can be difficult without a context to prompt recall, and questions can also help link these facts together, much as people commonly find recall better when knowledge is presented in visual, aural and tactile formats. The machine learning equivalent would be adding extra input dimensions to associate with the output. Supervised learning also makes sense for learning motor skills; this is roughly what many people already do when practising skills for sports or musical instruments.

It makes sense to use slightly different procedures for practising motor skills compared to doing quizzes. In addition to getting the desired outcome, gaining proficiency also requires practising the technique of the skill. Good outcomes can often be achieved with poor technique, and poor outcomes might occur with good technique. But to attain a high proficiency, technique is very important. To learn a skill well, it is necessary to pay attention not only to errors in the outcome, but also to errors in the technique. For this reason, it is good to first spend time focusing practice on the technique. Once the technique is correct, focus can then be more effectively directed toward achieving the desired outcome.

given correct skill technique, T, and stopping criteria, S
    do
        attempt skill
        compare attempt technique to correct technique, T
        note required adjustments to technique
    while stopping criteria, S, are not met

given desired skill outcome, O, and stopping criteria, S
    do
        attempt skill
        compare attempt outcome to desired outcome, O
        note required adjustments to skill
    while stopping criteria, S, are not met

These basic, general algorithms spell out the obvious: what many people already do is learn through repeated phases of attempts, evaluations and adjustments. It’s possible to continue to describe current methods of teaching and learning as algorithms. And it’s also possible to search for optimal learning processes, characterising the learning algorithms we use, and the structure of education, to discover what is most effective. It may be that different people learn more effectively using different algorithms, or that some people could benefit from practising these algorithms to get better at learning. In future, I will try to write some further posts about learning topics and skills, applications of the different paradigms of learning, and algorithms describing systems of education.

I spy with my computer vision eye… Wally? Waldo?

Lately I’ve been devoting a bit of my attention to image processing and computer vision. It’s interesting to see so many varied processes applied to the problem over the last 50 or so years, especially when computer vision was once thought to be solvable in a single summer’s work. We humans perceive things with such apparent ease, it was probably thought that it would be a much simpler problem than playing chess. Now, after decades of focused attention, the attempts that appear most successful at image recognition of handwritten digits, street signs, toys, or even thousands of real-world images, are those that, in some way, model the networks of connections and processes of the brain.

You may have heard about the Google learning system that learned to recognise the faces of cats and people from YouTube videos. This is part of a revolution in artificial neural networks known as deep learning. Among deep learning architectures are ones that use many units that activate stochastically and clever learning rules (e.g., stochastic gradient descent and contrastive divergence). The networks can be trained to perform image classification to state-of-the-art levels of accuracy. Perhaps another interesting thing about these developments, a number of which have come from Geoffrey Hinton and his associates, is that some of them are “generative”. That is, while learning to classify images, these networks can be “turned around” or “unfolded” to create images, compress and cluster images, or perform image completion. This has obvious parallels to the human ability to imagine scenes, and the current understanding of the mammalian primary visual cortex that appears to essentially recreate images received at the retina.

A related type of artificial neural network that has had considerable success is the convolutional neural network. Convolution here is just a fancy term for sliding a small patch of network connections across the entire image to find the result at all locations. These networks also typically use many layers of neurons, and have achieved similar success in image recognition. These convolutional networks may model known processes in the visual cortices, such as simple cells that detect edges of certain orientations. Outlines in images are combined into complex sets of features and classified. An earlier learning system, known as the neocognitron, used layers of simple cell-like filters without the convolution.
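
To make the idea of sliding a small patch of connections across an image a little more concrete, here is a minimal NumPy sketch of applying a single filter at every location of a grayscale image. The filter shown is a simple vertical-edge detector chosen only for illustration; real convolutional networks learn their filters and stack many such layers.

import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over every position of the image and record the response.
    # A naive, valid-mode correlation kept simple for illustration.
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A simple vertical-edge filter, loosely analogous to an orientation-selective simple cell.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [2.0, 0.0, -2.0],
                        [1.0, 0.0, -1.0]])

image = np.random.rand(28, 28)        # stand-in for a small grayscale image
feature_map = convolve2d(image, edge_kernel)
print(feature_map.shape)              # (26, 26): the filter's response at every location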

The process of applying the same edge-detection filter over the whole image is similar to the parallel processing that occurs in the brain. The thousands of neurons functioning simultaneously have an obvious practical difference from the sequential computation performed in the hardware of a computer; however, GPUs with many processor cores now allow parallel processing in machines. If, rather than using direction-selective simple cells to detect edges, we use image features (such as a loop in a handwritten digit, or the dark circle representing the wheel of a vehicle), we might say the convolution process is similar to scanning an image with our eyes.

Even when we humans are searching for something hidden in a scene, such as our friend Wally (or Waldo), our attention typically centres on one thing at a time. Scanning large, detailed images for Wally often takes us a long time. A computer trained to find Wally in an image using a convolutional network could methodically scan the image a lot faster than us with current hardware. It mightn’t be hard to get a computer to beat us in this challenge for many Where’s Wally images with biologically-inspired image recognition systems (rather than more common, but brittle, image processing techniques).

Even though I think these advances are great, it seems there are things missing from what we are trying to do with these computer vision systems and how we’re trying to train them. We are still throwing information at these learning systems as the disembodied number-crunching machines they are. Yet consider how our visual perception allows us to recognise objects in images with little regard for scale, translation, shear, rotation or even colour and illumination; these things are major hurdles for computer vision systems, but for us they just provide more information about the scene. These are things we learn to do. Most of the focus of computer vision seems to be related to the concept of the “what pathway”, rather than the “how pathway”, of the two-streams hypothesis of vision processing in the brain. Maybe researchers could start looking at ways of making these deep networks take that next step. Though extracting information from a scene, such as locating sources of illumination or the motion of objects relative to the camera, might be hard to fit into the current trend of trying to perform unsupervised learning from enormous amounts of unlabelled data.

I think there may be significant advantages to treating the learning system as embodied, and to making the real-world property of object permanence something the learning system can latch onto. It’s certainly something that can provide a great deal of leverage in our own learning about objects and how our interactions influence them. It is worth mentioning that machine learning practitioners already commonly create numerous modified training images from their given set and see measurable improvements. This is similar to what happens when a person or animal is exposed to an object and given the chance to view it from multiple angles and under different lighting conditions. Having a series of contiguous view-points is likely to more easily allow parts of our brain to learn to compensate for the different perspectives that scale, shear, rotate and translate the view of objects. It may even be important to learning to predict and recreate different perspectives in our imagination.
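
As an aside on that augmentation point, here is a minimal NumPy sketch of generating a few modified copies of a single training image. The specific transformations (flips and 90-degree rotations) are just convenient stand-ins for the richer scale, shear, rotation and illumination changes discussed above.

import numpy as np

def augment(image):
    # Return several modified copies of the image: mirror flips and 90-degree
    # rotations. Real augmentation pipelines also vary crops, scale and colour.
    copies = [image]
    copies.append(np.fliplr(image))        # mirror left-right
    copies.append(np.flipud(image))        # mirror top-bottom
    for k in (1, 2, 3):
        copies.append(np.rot90(image, k))  # rotate by 90, 180 and 270 degrees
    return copies

image = np.random.rand(32, 32)             # stand-in for one training image
training_batch = augment(image)
print(len(training_batch))                 # 6: the original plus five variants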

Fiona preview: Artificial minds “sparking together”

This post previews the new online autonomous avatar service and community, Fiona, which is currently in beta, and describes some of my thoughts regarding the concept. I don’t have access to the beta, but there are some interesting videos explaining how the community may work and how the online avatars are currently developed and constructed. If you are interested, have a look at the video advertisement below, by the creators of Fiona:

 

In summary, the idea appears to be: to create a service in which you can create or buy an online avatar (or artificial mind?) to use on websites for customer service and interaction. This service is in conjunction with an online community for people to use and develop the avatars, composed of a collection of “sparks”–what appear to be units for perceptual or cognitive processes. Below is a video demonstration that has been released on how to create an avatar by connecting sparks in the online graphical interface. For people that have some experience with visual programming languages (e.g. MathWorks’ Simulink and LabView) there are some obvious visual similarities.

 

First of all, an online community like this could be a good thing for the development of artificial intelligence in general. Since the community is pitched as also being a market for people to develop and sell their spark-programs, that could be an attractive incentive to participate. This sounds like it has the potential to generate interest in artificial intelligence, and to provide a viable service for people and businesses that would like to have an interactive avatar on their website.

A more in depth review might be in order once Fiona is out of beta. It will be particularly interesting to see how the visual programming language and underlying framework translate into creating “artificial minds”. Will Fiona be flexible enough to implement any existing cognitive architectures? And will it detract from or be beneficial to other open source projects for artificial general intelligence development, such as OpenCog? Only time will tell.

Consciousness’s abode: Subjugate the substrate

Philosophy of mind has some interesting implications for artificial intelligence, summed up by the question: can a machine ever be “conscious”? I’ve written about this in earlier posts, but recently I’ve come across an argument I hadn’t considered very deeply: that substrate matters. There are lots of ways to approach this issue, but if the mind and consciousness are a product of the brain, then surely the neuroscience perspective is a good place to start.

Investigations show that the activity of different brain regions occurs predictably during different cognitive and perceptual activities. There are also predictable deficits that occur in people when these parts of the brain are damaged. This suggests that the mind and consciousness are a product of the matter and energy that make up the brain. If you can tell me how classical Cartesian dualism can account for that evidence, I’m all ears. 🙂

I will proceed under the assumption that there isn’t an immaterial soul that is the source of our consciousness and directs our actions. But if we’re working under the main premise of physicalism, we still have at least one interesting phenomenon to explain: “qualia”. How does something as abstract and seemingly immaterial as our meaningful conscious experience arise from our physical brain? That question isn’t going to get answered in this post (but an attempt is going to emerge in this blog).

In terms of conscious machines, we’re still confronted with the question of whether a machine is capable of the sort of conscious experience that we biological organisms have. Does the hardware matter? I read and commented on a blog post on Rationally Speaking, after reading a description of the belief that the “substrate” is crucial for consciousness. The substrate argument goes that even though a simulation of a neuron might behave the same as a biological neuron, since it is just a simulation, it doesn’t interact with the physical world to produce the same effect. Ergo, no consciousness. Tell me if I’ve set up a straw-man here.

The author didn’t like me suggesting that we should consider the possibility of the simulation being hooked up to a machine that allowed it to perform the same physical interactions as the biological neuron (or perform photosynthesis, in the original example). We’re not allowed to “sneak in” the substrate, I’m told. 🙂 I disagree; I think it is perfectly legitimate to have this interaction in our thought experiment. And isn’t that what computers already do when they play sound or show images or accept keyboard input? Computers simulate sound and the emission of light and interact with the physical world. It’s restricted, I admit, but as technology improves there is no reason to think that simulations couldn’t be connected to machines that allow them to interact with the world as their physical equivalent would.

Other comments by readers of that Rationally Speaking post mentioned interesting points: the China brain (or nation) thought experiment, and what David Chalmers calls the “principle of organisational invariance”. The question raised by the China brain and discussed by Chalmers is: if we create the same functional organisation of people as neurons in a human brain (i.e., people communicating as though they were the neurons with the same connections), would that system be conscious? If we accept that the system behaved in the exact same way as the brain, that neurons spiking is a sufficient level of detail to capture consciousness, and the principle of organisational invariance, the China brain should probably be considered conscious. Most people probably find that unintuitive.

If we accept that the Chinese people simulating a human brain also create a consciousness, we have a difficult question to answer; some might even call it a “hard problem”. 🙂 If consciousness is not dependent on substrate, it seems that consciousness might really be something that is abstract and immaterial. Therefore, we might be forced to choose between considering consciousness an illusion, or letting abstract things exist under our definition of physicalism. [Or look for alternative explanations and holes in the argument above. :)]

Values and Artificial Super-Intelligence

[Image caption: Sorry, robot. Once the humans are gone, bringing them back will be hard.]

This is the sixth and final post in the current series on rewards and values. The topic discussed is the assessment of values as they could be applied to an artificial super-intelligence: what might the final outcome be, and how might this help us choose “scalable” moral values?

First of all we should get acquainted with the notion of the technological singularity. One version of the idea goes: should we develop an artificial general intelligence that is capable of making itself more intelligent, it could do so repeatedly and at an accelerating pace. Before long the machine is vastly more intelligent and powerful than any person or organisation, and essentially achieves god-like power. This version of the technological singularity appears to be far from a mainstream belief in the scientific community; however, anyone who believes consciousness and intelligence are solely the result of physical processes in the brain and body could rationally believe that these processes could be simulated in a computer. It could logically follow that such a simulated intelligence could go about acquiring more computing resources to scale up some aspects of its intelligence and try to improve upon, and add to, the structure underlying its intelligence.

Many people who believe that this technological singularity will occur are concerned that such an AI could eliminate the human race, and potentially all life on Earth, for no more reason than we happen to be in the way of it achieving some goal. A whole non-profit organisation is devoted to trying to negate this risk. These people might be right in saying we can’t predict the actions of a super-intelligent machine – with the ability to choose what it would do, predicting its actions could require the same or a greater amount of intelligence. But the assumption usually goes that the machine will have some value-function that it will be trying to operate under and achieve the maximum value possible. This has been accompanied by interesting twists in how some people define a “mind”, and by the lack of an obvious definition of “intelligence”. Nonetheless, this concern has apparently led to at least one researcher being threatened with death. (People do crazy things in the name of their beliefs.)

A favourite metaphor for an unwanted result is the “paperclip maximiser”: a powerful machine devoted to turning all the material of the universe into paperclips. The machine may have wanted to increase the “order” of the universe, thought paperclips were especially useful to that end, and settled on turning everything into paperclips. Other values could result in equally undesirable outcomes; the same article describes another scenario where a variant of utilitarianism might have the machine maximising smiles by turning the world and everything else into smiley faces. This is a rather unusual step for an “intelligent” machine; somehow the machine skipped any theory of mind and went straight to equating happiness with smiley faces. Nonetheless, other ideas of what we should value might not fare much better. It makes some sense to replace the haphazard way we seek pleasure with electrodes in our brains, if pleasure is our end goal. By some methods of calculation, suffering could best be minimised by euthanising all life. Of course, throughout this blog series I’ve been painting rewards and values (including their proxies, pleasure and pain) not as ends, but as feedback (or feelings) we’ve evolved for the sake of learning how to survive.

If we consider the thesis of Sam Harris, that there is a “moral landscape”, then choosing a system of morals and values is a matter of optimisation. Sam Harris thinks we should be maximising the well-being of conscious creatures. And this view of well-being as an optimisable quantity could lead us to consider morality as an engineering problem. Well-being, however, might be a little too vague for a machine to develop a function for calculating the value of all actions. Our intuitive human system of making approximate mental calculations of the moral value of actions might be very difficult for a computer to reproduce without simulating a human mind. And humans are notoriously irregular in their beliefs about what counts as moral behaviour.
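
Purely to illustrate the optimisation framing (and not as a serious proposal for a value function), a toy sketch might score a handful of candidate actions with some crude well-being measure and pick the maximum. Every name and number below is invented, and defining such a function for real actions is exactly the difficulty described above.

# A toy illustration of "morality as optimisation": score candidate actions with
# a hopelessly simplified well-being function and choose the highest-scoring one.
candidate_actions = {
    "volunteer_locally": {"wellbeing_gain": 5, "harm": 0},
    "do_nothing": {"wellbeing_gain": 0, "harm": 0},
    "lie_for_convenience": {"wellbeing_gain": 1, "harm": 3},
}

def wellbeing_value(effects):
    return effects["wellbeing_gain"] - effects["harm"]

best_action = max(candidate_actions, key=lambda a: wellbeing_value(candidate_actions[a]))
print(best_action)   # volunteer_locally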

In the last post I raised the idea of valuing what pleasure and pain teach us about ourselves and the world. This could be generalised to valuing all learning and information – valuing taking part in and learning from the range of human experience, such as music, sights, food and personal relationships, as well as learning about and making new scientific observations and discovering the physical laws of the universe. Furthermore, the physical embodiment of information within the universe as structures of matter and energy, such as living organisms, could also lead us to consider all life as inherently valuable too. Now this raises plenty of questions. What actually counts as information, and how would we measure it? Can there really be any inherent value in information? Can we really say that all life and some non-living structures are the embodiment of information? How might valuing information and learning as ends in themselves suggest we should live? What would an artificial super-intelligence do under this system of valuing information? Questions such as these could be fertile grounds for discussion. In starting a new series of blog posts I hope to explore these ideas and hopefully receive some feedback from anyone who reads this blog.

And thus ends this first blog series on rewards and values. A range of related topics were covered: the origin of felt rewards being within the brain, the representation of the world as an important aspect of associating values, the self-rewarding capabilities that might benefit autonomous robots, the likely evolutionary origin of rewards in biological organisms, and the development of morality and ethics as a process of maximising that which is valued. The ideas in some of these posts may not have been particularly rigorously argued or cited, so everything written should be taken with a grain of salt. Corrections and suggestions are most certainly welcome! I hope you will join me exploring more ideas and taking a few more mental leaps in future.