Friday, November 10, 2006

Zombies, AI and Temporal Lobe Epilepsy: towards a universal consciousness and behavioral grammar?

I was recently reading an article on zombies, about how the zombie argument has been used against physicalism and in consciousness debates in general, and one passage about Descartes at the beginning of the article captured my attention:

Descartes held that non-human animals are automata: their behavior is explicable wholly in terms of physical mechanisms. He explored the idea of a machine which looked and behaved like a human being. Knowing only seventeenth century technology, he thought two things would unmask such a machine: it could not use language creatively rather than producing stereotyped responses, and it could not produce appropriate non-verbal behavior in arbitrarily various situations (Discourse V). For him, therefore, no machine could behave like a human being. (emphasis mine)

To me this seems like a very reasonable and important speculation: although we have learned a lot about how we are able to generate an infinite variety of creative sentences through Chomsky's theory of generative grammar (I must qualify: we only know how we create new grammatically valid sentences; the study of semantics has not complemented the study of syntax, so we still do not know why we are also able to create meaningful sentences and not just grammatically correct gibberish like "Colorless green ideas sleep furiously". The fact that this grammatically correct sentence is still interpretable, by reading 'colorless', 'green', etc. through polysemy, homonymy or metaphor, may provide a clue to how we map meanings, as in Conceptual Metaphor Theory, but that discussion is for another day), we still do not have a coherent theory of how and why we are able to produce a variety of behavioral responses in arbitrarily various situations.

If we stick to a physical, brain-based, reductionist, no-ghost-in-the-machine, evolved-as-opposed-to-created view of human behavior, then it seems reasonable to start from the premise that humans are an improvement over the animal models of stimulus-response (classical conditioning) and response-reinforcement (operant conditioning) theories of behavior, and to build upon those models to explain what mechanism humans have evolved to provide behavioral flexibility as varied, creative and generative as our capacity for grammatically correct language generation. The discussions of behavioral coherence, meaningfulness, appropriateness and integrity can be left for another day, but the questions of behavioral flexibility and creativity need to be addressed and resolved now.

I'll start by emphasizing the importance of the response-reinforcement type of mechanism and circuitry. Unfortunately, most of the work I am familiar with regarding the modeling of the human brain/mind/behavior using neural networks focuses on the connectionist model, with the implicit assumption that all response is stimulus driven and one only needs to train the network and, using feedback, associate a correct response with a stimulus. Thus, we have an input layer for collecting or modeling sensory input, a hidden association layer, and an output layer that can be considered a motor effector system. This dissociation into input acuity and sensitivity represented by an input layer, output variability and specificity in the form of an output layer, and one or more hidden layers that associate input with output, maps very well to our intuitions of a sensory system, a motor system and an association system in the brain generating behavior relevant to external stimuli/situations. However, this is simplistic in the sense that it is based solely on stimulus-response associations (classical conditioning) and ignores the other relevant type of association, response-reinforcement. Let me clarify that I am not implying that neural network models are behavioristic: in the form of hidden layers they leave enough room for cognitive phenomena; the contention is that they do not take into account the operant conditioning mechanisms. Here it is instructive to note that feedback during training is not equivalent to operant-reinforcement learning: the feedback is necessary to strengthen the stimulus-response associations; it only indicates that a particular response triggered by the particular stimulus was correct.
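The connectionist picture described above can be sketched in a few lines. This is a minimal illustration of my own, not a model from any particular paper: a four-unit stimulus (input) layer, a hidden association layer, and a two-unit response (output) layer, trained with supervised feedback. Note that, as argued above, the feedback here only strengthens a stimulus-response association; nothing in this network is operant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus-response network: input (stimulus) -> hidden (association)
# -> output (response). The feedback used in training merely strengthens
# a stimulus-response pairing; it is not operant reinforcement.
W1 = rng.normal(0, 0.5, (4, 8))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (8, 2))   # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(stimulus):
    hidden = sigmoid(stimulus @ W1)
    return hidden, sigmoid(hidden @ W2)

# One stimulus pattern paired with one 'correct' response.
stimulus = np.array([1.0, 0.0, 1.0, 0.0])
target   = np.array([1.0, 0.0])            # e.g. respond 'eat', not 'ignore'

for _ in range(2000):                      # supervised feedback trials
    hidden, out = forward(stimulus)
    err = target - out
    delta_out = err * out * (1 - out)
    W2 += 0.5 * np.outer(hidden, delta_out)
    back = delta_out @ W2.T
    W1 += 0.5 * np.outer(stimulus, back * hidden * (1 - hidden))

_, out = forward(stimulus)
print(out.round(2))  # the response now tracks the trained target
```

The point of the sketch is what it lacks: there is no way for this network to emit a response in the absence of a stimulus, which is exactly the gap the following paragraphs try to fill.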

For operant learning to take place, the behavior has to be spontaneously generated, and its probability of occurrence is then manipulated based on its history of reinforcement. This takes us to an apparently hard problem: how can behavior be spontaneously generated? All our lives we have equated reductionism and physicalism with determinism, so a plea for spontaneous behavior seems almost like begging for a ghost in the machine. Yet on careful thinking, the problem of spontaneity (behavior in the absence of stimulus) is not that problematic. One could have a random number generator and code for random responses triggered by that random number generator. One could claim that introducing randomness in no way gives us 'free will', but that is a different argument. What we are concerned with is spontaneous action, and not necessarily 'free' or 'willed' action.

To keep things simple, consider a periodic oscillator in your neural network. Let us say it takes 12 hours to complete one oscillation (i.e. it is a simple inductor-capacitor pair, and it takes 6 hours for the capacitor to discharge and another 6 hours for it to recharge); now we can make an a priori connection between this 12-hour clock in the hidden layer and one of the outputs in the output layer, which gets activated whenever the capacitor has fully discharged, i.e. at a periodic interval of 12 hours. Suppose that this output response is labeled 'eat'. Thus we have coded into our neural network a spontaneous mechanism by which it 'eats' at 12-hour intervals.
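A toy version of this oscillator-driven output makes the point concrete. This is my own sketch under the assumptions above (a 12-hour period, an output unit labeled 'eat' that fires at the discharge point); the network acts with no external stimulus at all, purely from the internal clock's phase.

```python
# Hypothetical 12-hour oscillator hard-wired to an 'eat' output unit.
PERIOD_HOURS = 12

def clock_phase(t_hours):
    """Phase in [0, 1): 0 means the capacitor has just fully discharged."""
    return (t_hours % PERIOD_HOURS) / PERIOD_HOURS

def output_layer(t_hours):
    # The 'eat' unit fires only at the discharge point of the oscillator;
    # no stimulus is consulted anywhere.
    return {'eat': clock_phase(t_hours) == 0.0}

# Simulate two days, one step per hour, and log the spontaneous actions.
actions = [t for t in range(48) if output_layer(t)['eat']]
print(actions)  # [0, 12, 24, 36]: an 'eat' response every 12 hours
```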

Till now we haven't really trained our neural net, and moreover we have assumed circuitry like a periodic oscillator from the beginning, so you may object that this is not how our brain works. But let us remember that, just as normal neurons in the brain form the model for neurons in the neural network, there is also a suprachiasmatic nucleus that gives rise to circadian rhythms and implements a periodic clock.

As for training, one can assume the existence of just one periodic clock of small granularity, say of 1-second duration, and then, using accumulators that code for how many ticks have elapsed since the last trigger, one can code for any arbitrary periodic response of greater than one-second granularity. Moreover, one need not code for such accumulators: they would arise automatically out of training, from the other neurons connected to this 'clock' and lying between the clock and the output layer. Suppose that initially a one-second clock output is connected (via intervening hidden neuron units) to an output marked 'eat'. Now, we have feedback in this system too. Suppose that while training we provide positive feedback only on every 60*60*12th trial (and its multiples) and provide negative feedback on all other trials; it is not inconceivable that an accumulator neural unit would form in the hidden layer and count the number of ticks coming out of the clock: it would send the trigger to the output layer only on every 60*60*12th trial and suppress the output of the clock on every other trial. Voila! We now have a 12-hour clock (implemented digitally by counting ticks) inside our neural network, coding for a 12-hour periodic response. We just needed one 'innate' clock mechanism, and using that and the facts of 'operant conditioning' or 'response-reinforcement' pairing we can create an arbitrary number of such clocks in our body/brain. Also, please note that we need just one 12-hour clock, but can flexibly code for many different 12-hour periodic behaviors. Thus, if the 'count' in the accumulator is zero, we 'eat'; if the count is midway between 0 and 60*60*12, we 'sleep'. Thus, though both eating and sleeping follow a 12-hour cycle, they do not occur concurrently, but are separated by a 6-hour gap.
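The accumulator described above, once trained, would behave roughly like the following sketch. The counting threshold (60*60*12 one-second ticks) and the 'eat'/'sleep' phase offsets come from the text; the explicit class is my own illustration of the end state that training is supposed to reach, not of the training process itself.

```python
# One 'innate' one-second clock plus a trained accumulator yields any
# longer-period response; one shared count codes for several behaviors.
class Accumulator:
    def __init__(self, period_ticks):
        self.period = period_ticks
        self.count = 0

    def tick(self):
        """Called once per one-second clock pulse; returns fired actions."""
        actions = []
        if self.count == 0:
            actions.append('eat')              # count == 0 -> 'eat'
        if self.count == self.period // 2:
            actions.append('sleep')            # midway -> 'sleep', 6 h later
        self.count = (self.count + 1) % self.period
        return actions

acc = Accumulator(60 * 60 * 12)                # one 12-hour clock
log = {}
for t in range(60 * 60 * 24):                  # simulate one day of ticks
    for action in acc.tick():
        log.setdefault(action, []).append(t)

print(log['eat'])    # ticks 0 and 43200: every 12 hours
print(log['sleep'])  # ticks 21600 and 64800: the same period, offset 6 hours
```

Note how both behaviors ride on the single counter, which is the flexibility claimed above: one clock, many differently phased periodic responses.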

Suppose further that one reinforcement that one is constantly exposed to, and that one uses for training the clock, is sunlight. Say the circadian clock is reinforced only by the reinforcement provided by exposure to the midday sun, and by no other reinforcements. Then we have a mechanism in place for the external tuning of our internal clocks to a 24-hour circadian rhythm. It is conceivable that for training other periodic operant actions one need not depend on external reinforcement or feedback, but may implement an internal reinforcement mechanism. To make my point clear: while the 'eat' action, i.e. a voluntary operant action, may initially be generated randomly and, in the traditional sense of reinforcement, be accompanied by intake of food, which in the classical sense of the word is a 'reinforcement', the intake of food, being part and parcel of the 'eat' action, should not be treated as the 'feedback' that is required during training of the clock. During the training phase, though the operant may be activated at different times (and by the consequent intake of food be intrinsically reinforced), the feedback should be positive only for the operant activations in line with the periodic training, i.e. only on trials on which the operant is produced as per the periodic training requirement; on all other trials negative feedback should be provided. After the training period, not only would the operant 'eat' be associated with the reinforcement 'food': it would also occur with a certain rhythm and periodicity. The goal of training here is not to associate a stimulus with a response (not the usual neural network association learning), but to associate an operant (response) with a schedule (or a concept of 'time').
It's not that revolutionary a concept, I hope: after all, an association of a stimulus (or 'space') with a response is per se meaningless; it is meaningful only in the sense that the response is reinforced in the presence of the stimulus, and the presence of the stimulus provides us a clue to indulge in a behavior that would result in a reinforcement. On similar lines, an association of a response with a schedule may seem arbitrary and meaningless; it is meaningful in the sense that the response is reinforced at a scheduled time/event, and the occurrence of the scheduled time/event provides us with a reliable clue to indulge in a behavior that would result in reinforcement.

To clarify by way of an example, 'shouting' may be considered a response that is normally reinforcing, say because it is cathartic in nature. Now, 'shouting' on seeing your spouse's lousy behavior may have had a history of reinforcement, and you may have a strong association between seeing 'spouse's lousy behavior' and 'shouting'. You thus have a stimulus-response pair. Why you don't shout always, or when, say, the stimulus is your boss's lousy behavior, is because in those stimulus conditions the response 'shouting', though still cathartic, may have severe negative costs associated with it, and hence in those situations it is not really reinforced. Hence the need for an association between 'spouse's lousy behavior' and 'shouting': only in the presence of that specific stimulus is shouting reinforcing, and not in all cases.

Take another example, that of 'eating', which again can be considered a normally rewarding and reinforcing response, as it provides us with nutrition. Now, 'eating' 2 or 3 times a day may be rewarding; but eating all the time, or only with a 108-hour periodicity, may not be that reinforcing a response, because such a schedule does not take care of our body's requirements. While eating with a 108-hour periodicity would impose severe costs on us in terms of undernutrition and survival, eating with a 2-minute periodicity too would not be that reinforcing. Thus, the idea of training spontaneous behaviors as per a schedule is not that problematic.

Having taken a long diversion, arguing the case for 'operant conditioning'-based training of neural networks, let me come to my main point.

While the 'stimulus' and the input layer represent the external 'situation' that the organism is facing, the network comprising the clocks and accumulators represents the internal state and 'needs' of the organism. One may even claim, a bit boldly, that they represent the goals or motivations of the organism.

An 'eat' clock that is about to trigger an 'eat' response may represent a need to eat. This clock need not be a digital clock that triggers an 'eating' act only when the 12-hour cycle is completed to the dot. Rather, this would be a probabilistic, analog clock, with the 'probability' of the eating response getting higher as the 12-hour cycle comes to an end, and the clock being reset whenever the eating response happens. If the clock is in the early phases of the cycle (just after an eating response), then the need to eat (hunger) is low; when the clock is in the last phases of the cycle, the hunger need is strong and would make the 'eating' action more and more probable.

Again, this response-reinforcement system need not be isolated from the stimulus-response system. Say one sees the stimulus 'food', and the hunger clock is still showing 'medium hungry'. The partial activation of the 'eat' action as a result of seeing the stimulus 'food' (other actions, like 'throw the food' or 'ignore the food', may also be activated) may win over the competing responses to the stimulus, as the hunger clock is still producing a medium probability of 'hunger' activation, and hence one may end up eating. This, however, may reset the hunger clock, and now a second 'food' stimulus may not be able to trigger the 'eat' response, as the activation of 'eat' due to the hunger clock is minimal and other competing actions may win over 'eat'.
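The analog clock and its interaction with the 'food' stimulus can be sketched as follows. The linear probability ramp, the class name and the 'eat'/'ignore' labels are my own simplifying assumptions; the mechanics (probability rising with phase, reset on eating, a second stimulus then losing out) are from the two paragraphs above.

```python
import random

PERIOD = 12.0  # hours of the hunger cycle

class HungerClock:
    """Analog 'hunger clock': P(eat) rises with time since the last meal.
    The linear ramp is a simplifying assumption for illustration."""
    def __init__(self, phase=6.0):
        self.phase = phase            # hours since the last 'eat'

    def eat_probability(self):
        return min(self.phase / PERIOD, 1.0)

    def present_food(self):
        """The stimulus 'food' appears; 'eat' competes with 'ignore'."""
        if random.random() < self.eat_probability():
            self.phase = 0.0          # eating resets the clock
            return 'eat'
        return 'ignore'

random.seed(1)
clock = HungerClock(phase=11.0)       # late in the cycle: very hungry
print(clock.present_food())           # 'eat' wins with high probability
print(clock.eat_probability())        # 0.0: the clock was reset, so a
                                      # second 'food' stimulus is likely ignored
```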

To illustrate the interaction between stimulus-response and response-reinforcement in another way: on seeing the written word 'hunger' as a stimulus, one consequence of that stimulus could be to manipulate the internal hunger clock so that its need for food is increased. This would be a simple operation of increasing the clock count or making the 'need for food' stronger, thus increasing the probability of occurrence of the 'eat' action.

I'd also like to take a leap here and equate 'needs' with goals and motivations. Thus, some of the most motivating factors for humans, like food, sex and sleep, can be explained in terms of underlying needs or drives, which seem to be periodic in nature. It is also interesting to note that many of them do have cycles associated with them: we have sleep cycles and eating cycles, and many of these cycles are linked with each other or with the circadian rhythm, so that if the clock goes haywire it has multiple linked effects across the whole spectrum of motivational 'needs'. In a manic phase one would have low needs to sleep, eat, etc., while the opposite may be true in depression.

That brings me finally to Marvin Minsky and his AI attempts to code for human behavioral complexity.

In his analysis of the levels of mental activity, he starts with the traditional if, then rule and then refines it to include both situations and goals in the if part.

To me this seems intuitively appealing: One needs to take into account not only the external 'situation', but also the internal 'goals' and then come up with a set of possible actions and maybe a single action that is an outcome of the combined 'situation' and 'goals' input.

However, Minsky does not think that simple if-then rules, even when they take 'goals' into consideration, would suffice, so he posits if-then-result rules.

To me it is not clear how introducing a result clause makes any difference: both goals and stimulus may lead to multiple if-then rule matches and the activation of multiple actions. These action activations are nothing but what Minsky has clubbed into the result clause, and we still have the hard problem of, given a set of matching clauses, how we choose one of them over the others.

Minsky has evidently thought about this and says:

What happens when your situation matches the Ifs of several different rules? Then you’ll need some way to choose among them. One policy might arrange those rules in some order of priority. Another way would be to use the rule that has worked for you most recently. Yet another way would be to choose rules probabilistically.

To me this seems not a problem of choosing which rule to use, but of choosing which response to choose, given several possible responses resulting from the application of several rules to this situation/goal combination. It is tempting to assume that the 'needs' or 'goals' would be able to uniquely determine the response given ambiguous or competing responses to a stimulus; yet I can imagine scenarios where the 'needs' of the body do not provide a reliable clue, and one may need the algorithms/heuristics suggested by Minsky to resolve conflicts. Thus, I see the utility of if-then-result rules: we need a representation not only of the if part (goals/stimulus) of the rule, which tells us the set of possible actions that can be triggered by this stimulus/situation/needs combo, but also of the result part of the rule, which tells us what reinforcement values these responses (actions) have for us, so that we can use this value-response association to resolve the conflict and choose one response over the others. This response-value association seems very much like the operant-reinforcement association, so I am tempted once more to believe that the value one ascribes to a response may change with bodily needs, and rather is reflective of bodily needs; but I'll leave that assumption for now and instead assume that somehow we do have different priorities assigned to the responses (and not to the rules, as Minsky had originally proposed) and make the selection on the basis of those priorities.
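The selection scheme just described can be sketched in a few lines: the if part matches situation and goal to yield candidate responses, and the result part carries each response's reinforcement value, used to resolve the conflict. The rule contents and values below are illustrative inventions, not Minsky's own examples.

```python
# if-then-result rules as (situation, goal, response, result_value) tuples.
RULES = [
    ('food_visible', 'hungry', 'eat',    0.9),
    ('food_visible', 'hungry', 'hoard',  0.4),
    ('food_visible', 'sated',  'ignore', 0.6),
    ('food_visible', 'sated',  'eat',    0.1),
]

def respond(situation, goal):
    # 1. if part: collect every response whose conditions match.
    candidates = [(resp, value) for (sit, g, resp, value) in RULES
                  if sit == situation and g == goal]
    # 2. result part: resolve the conflict by reinforcement value,
    #    choosing among responses rather than among rules.
    return max(candidates, key=lambda rv: rv[1])[0]

print(respond('food_visible', 'hungry'))  # 'eat'
print(respond('food_visible', 'sated'))   # 'ignore'
```

On the view argued above, the `result_value` column would not be static but would shift with bodily needs; keeping it fixed here is the simplifying assumption that makes the sketch deterministic.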

Though I have posited a single priority-based probabilistic selection of response, it is possible that a variety of selection mechanisms and algorithms are used and are activated selectively based on the problem at hand.

This brings me to Minsky's critic-selector model of mind. As per this model, one needs both critical thinking and problem solving abilities to act adaptively. One need not just be good at solving problems: one also has to understand and frame the right problem, and then use the problem solving approach that is best suited to it.

Thus, the first task is to recognize the problem type correctly. After recognizing a problem correctly, we may apply different selectors, or problem solving strategies, to different problems.

He also posits that most of our problem solving is analogical and not logical. Thus, recognizing the problem is more like recognizing an analogous past problem, and selecting is then applying the methods that worked in that case to this problem.

How does that relate to our discussion of behavioral flexibility? I believe that every time we are presented with a stimulus and have to decide how to behave in response, we are faced with a problem: that of choosing one response over all others. We need to activate a selection mechanism, and that selection mechanism may differ based on the critics we have used to define the problem. If the selection mechanism were fixed and hard-wired, then we wouldn't have behavioral flexibility. Because the selection mechanism may differ based on our framing of the problem in terms of the appropriate critics, our behavioral response may be varied and flexible. At times we may use the selector that takes into account only the priorities of different responses in terms of the needs of the body; at other times the selection may be guided by different mechanisms that involve emotions and values as the driving factors.

Minsky has also built a hierarchy of critic-selector associations, and I will discuss them in the context of developmental unfolding in a subsequent post. For now, it is sufficient to note that different types of selection mechanisms would be required to narrow the response set, under different critical appraisals of the initial problem.

To recap: a stimulus may trigger different responses simultaneously, and a selection mechanism would be involved that selects the appropriate response based on the values associated with the responses and on the selection algorithm that has been activated by our appraisal of the reason for the conflicting and competing responses. While critics help us formulate the reason for multiple responses to the same stimulus, the selector helps us apply different selection strategies to the response set, based on what selection strategy had worked on an earlier problem that involved analogous critics.
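The critic-selector pairing recapped above can be sketched as a dispatch table: the critic's diagnosis of why responses conflict picks the strategy applied to the response set. The critic names, strategy names and response attributes below are my own illustrative inventions, not Minsky's.

```python
import random
random.seed(0)

def by_need(responses):                    # selector: the body's priorities
    return max(responses, key=lambda r: r['need_value'])

def by_recency(responses):                 # selector: most recently reinforced
    return max(responses, key=lambda r: r['last_reinforced'])

def probabilistic(responses):              # selector: value-weighted chance
    weights = [r['need_value'] for r in responses]
    return random.choices(responses, weights=weights)[0]

# The critic's appraisal of the conflict determines which selector runs.
SELECTORS = {
    'bodily_conflict':   by_need,
    'habitual_conflict': by_recency,
    'novel_situation':   probabilistic,
}

def act(critic, responses):
    return SELECTORS[critic](responses)['name']

responses = [
    {'name': 'eat',   'need_value': 0.8, 'last_reinforced': 3},
    {'name': 'sleep', 'need_value': 0.3, 'last_reinforced': 7},
]
print(act('bodily_conflict', responses))    # 'eat'
print(act('habitual_conflict', responses))  # 'sleep'
```

The same response set yields different behavior under different critics, which is the source of flexibility claimed above: a fixed, hard-wired selector would always return the same winner.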

One can further dissociate this into two processes. One is grammar-based and syntactical, and uses the rules for generating a valid behavioral action based on the critic and selector predicates and on the particular response sets and strategies that make up the critic and selector clauses respectively. By combining and recombining the different critics and selectors one can make an infinite number of rules for how to respond to a given situation, and each such rule application may potentially lead to a different action. The other process is that of semantics: how the critics are mapped onto the response sets, and how the selectors are mapped onto different value preferences.

Returning to response selection: given a stimulus, clearly there are two processes at work. One uses the stored if-then rules (the stimulus-response associations) to make available to us the set of all actions that are a valid response to the situation; the other uses the then-result rules (and the response-value associations, which I believe are dynamic in nature and keep changing) to choose one response from that set as per the 'subjective' value it prefers at the moment. This may be the foundation for the 'memory' and 'attention' dissociations in the working memory abilities used in the Stroop task, and it is tempting to think that while the DLPFC and the executive centers determine the set of all possible actions (utilizing memory) given a particular situation, the ACC selects among the competing responses based on the values associated with them, by selectively directing attention to the selected response/stimulus/rule.

Also, it seems evident that one way to increase adaptive responses would be to become proficient in discriminating stimuli and perceiving the subjective world accurately; the other way would be to become more and more proficient in directing attention to a particular stimulus/response over others, and to our internal representations of them, so that we can discriminate between the different responses available and choose between them based on an accurate assessment of our current needs/goals.

This takes me finally to the two types of consciousness that Hughlings-Jackson had proposed: subject consciousness and object consciousness.

Using his ideas of sensorimotor function, Hughlings-Jackson described two "halves" of consciousness, a subject half (representations of sensory function) and an object half (representations of motor function). To describe subject consciousness, he used the example of the sensory representations formed when visualizing an object. The object is initially perceived at all sensory levels, producing a sensory representation of the object at all sensory levels. The next day, one can think of the object and have a mental idea of it without actually seeing it. This mental representation is the sensory or subject consciousness for the object, based on the stored sensory information of the initial perception of it.

What enables one to think of the object? This is the other half of consciousness, the motor side of consciousness, which Hughlings-Jackson termed "object consciousness." Object consciousness is the faculty of "calling up" mental images into consciousness, the mental ability to direct attention to aspects of subject consciousness. Hughlings-Jackson related subject and object consciousness as follows:

The substrata of consciousness are double, as we might infer from the physical duality and separateness of the highest nervous centres. The more correct expression is that there are two extremes. At the one extreme the substrata serve in subject consciousness. But it is convenient to use the word "double."

Hughlings-Jackson saw the two halves of consciousness as constantly interacting with each other, the subjective half providing a store of mental representations of information that the objective half used to interact with the environment.


The term "subjective" answers to what is physically the effect of the environment on the organism; the term "objective" to what is physically the reacting of the organism on the environment.

Hughlings-Jackson's concept of subjective consciousness is akin to the if-then representation of mental rules. One needs to perceive the stimuli as clearly as possible and to represent them along with their associated actions so that an appropriate response set can be activated to respond to the environment. His object consciousness is the attentional mechanism that is needed to narrow down the options and focus on those mental representations and responses that are to be selected and used for interacting with the environment.

As per him, subject and object consciousness arise from a need to represent sensations (stimuli) and movements (responses) respectively, and this need is apparent if our stimulus-response and response-reinforcement mappings have to be taken into account for determining appropriate action.

All nervous centres represent or re-represent impressions and movements. The highest centres are those which form the anatomical substrata of consciousness, and they differ from the lower centres in compound degree only. They represent over again, but in more numerous combinations, in greater complexity, specialty, and multiplicity of associations, the very same impressions and movements which the lower, and through them the lowest, centres represent.

He had postulated that temporal lobe epilepsy involves a loss of objective consciousness (leading to automatic movements, as opposed to voluntary movements that follow a schedule and do not happen continuously) and an increase in subjective consciousness (leading to feelings like deja vu, or an over-consciousness in which every stimulus seems familiar and triggers the same response set and nothing seems novel: the dreamy state). These he described as the positive and negative symptoms, or deficits, associated with an epileptic episode.

It is interesting to note that one of the positive symptoms he describes of epilepsy, associated with subjective consciousness of the third degree, is 'Mania': the same label that Minsky uses for a Critic at his sixth, self-conscious, level of thinking. The critics Minsky lists are:

Self-Conscious Critics. Some assessments may even affect one's current image of oneself, and this can affect one's overall state:

None of my goals seem valuable. (Depression.)
I'm losing track of what I am doing. (Confusion.)
I can achieve any goal I like! (Mania.)
I could lose my job if I fail at this. (Anxiety.)
Would my friends approve of this? (Insecurity.)

It is interesting to note that this Critic, or subjective appraisal of the problem in terms of Mania, can lead to a subjective consciousness that is characterized as Mania.

If Hughlings-Jackson studied epilepsy correctly and was able to make some valid inferences, then this may tell us a lot about how we respond flexibly to novel/familiar situations, and about how the internal complexity that is required to ensure flexible behavior leads to representational needs in the brain, which might in turn lead to the necessity of consciousness.

