Wednesday, March 25, 2009

The bipolar phenotype: Excessive self-regulatory focus?

In my last post I hinted that bipolar mania and depression may both be characterized by an excessive, overactive self-regulatory focus: promotion focus being related to mania and prevention focus to depression. It is important to pause and note that the bipolar propensity is towards more self-referential, goal-directed activity, resulting in excessive use of self-regulatory focus. To clarify, I am sticking my neck out and claiming that depression is marked by an excessive obsession with self-oriented, goal-directed activities, but with a prevention focus, thus centering on the self's responsibilities, duties and obligations with respect to significant others. Mania, on the other hand, also involves an excessive self-oriented, goal-directed focus, but the focus is promotional, with an obsession with hopes and aspirations, which are relatively more inward-focused and not as dependent on significant others.

Thus, my characterization of depression is of a state where the regulatory reference is negative (one is focused on avoiding ending up in a negative end-state, like being a burden on others), the regulatory anticipation is negative (one anticipates pain as a result of almost any act one may perform and thus dreads day-to-day activity) and the regulatory focus is negative (a prevention focus whereby one is more concerned with duties and obligations, and security is a paramount need). The entire depressive syndrome can be summed up as an overactivity of avoidance-based mechanisms. However, please note that there is still an excess of self-referential/self-focused thinking, and one is greatly motivated (although perhaps lacking the energy) to bridge the difference between the real self and the 'ought' self. One can say that one's whole life revolves around trying to become the 'ought' self, or rather that one conceptualizes oneself in terms of the 'ought' self.

Contrast this with mania, where the regulatory reference is positive (one is focused on achieving something grandiose), regulatory anticipation is positive (one feels in control and believes that only good things can happen to the self) and regulatory focus is positive (a promotion focus whereby one is more concerned with hopes, aspirations and growth/actualization needs). Still, just as in depression, there is an excess of focus on the self, and one is greatly motivated (and also has the energy) to bridge the difference between the real and the 'ideal' self. One can say that one's whole life revolves around trying to become the 'ideal' self, or rather that one conceptualizes oneself in terms of an 'ideal' self.

What can we predict from the above? We know that the brain's default network is involved in self-focused thoughts and ruminations. We can predict, and indeed know, that the default network is overactive in schizophrenics (and thus, by extension, in bipolars, who I believe have the same underlying pathology, at least as far as the psychotic spectrum is concerned), so we can say with some confidence that regulatory focus should indeed be high for bipolars and should be correlated with default network activity. We can also predict that during the manic phase the promotion-related neural network should be more active, and in the depressive phase the prevention-related areas of the brain should be more active. This last hypothesis still needs experimentation, but let's backtrack a bit and first look at the neural correlates of promotion and prevention regulatory self-focus.

For this, I refer readers to an (in my view) important study that tried to dissociate medial PFC and PCC activity (both of which belong to the default network) while people engaged in self-reflection. Here is the abstract of the study:

Motivationally significant agendas guide perception, thought and behaviour, helping one to define a ‘self’ and to regulate interactions with the environment. To investigate neural correlates of thinking about such agendas, we asked participants to think about their hopes and aspirations (promotion focus) or their duties and obligations (prevention focus) during functional magnetic resonance imaging and compared these self-reflection conditions with a distraction condition in which participants thought about non-self-relevant items. Self-reflection resulted in greater activity than distraction in dorsomedial frontal/anterior cingulate cortex and posterior cingulate cortex/precuneus, consistent with previous findings of activity in these areas during self-relevant thought. For additional medial areas, we report new evidence of a double dissociation of function between medial prefrontal/anterior cingulate cortex, which showed relatively greater activity to thinking about hopes and aspirations, and posterior cingulate cortex/precuneus, which showed relatively greater activity to thinking about duties and obligations. One possibility is that activity in medial prefrontal cortex is associated with instrumental or agentic self-reflection, whereas posterior medial cortex is associated with experiential self-reflection. Another, not necessarily mutually exclusive, possibility is that medial prefrontal cortex is associated with a more inward-directed focus, while posterior cingulate is associated with a more outward-directed, social or contextual focus.

The authors then touch upon something similar to what I have said above: one can be overly planful or goal-directed (the bipolar propensity), but it would still make sense to ask whether the focus is promotional or preventive. To quote:

The idea of variation in individuals’ regulatory focus highlights the difference between agendas and traits; two people could both be described by the trait ‘planful’, but planful about what? A person with a predominantly promotion focus would be more likely to be planful about attaining positive rewards or outcomes, while a person with a predominantly prevention focus would be more likely to be planful about avoiding negative events or outcomes. Although a promotion or prevention focus may dominate, the aspects of the self that are active change dynamically across situations (e.g. Markus and Wurf, 1987), thus most individuals have both promotion and prevention agendas. For example, the same person can hold both the hope of becoming rich (a promotion agenda) and the duty to support an aging parent (a prevention agenda), or the aspiration to be a good citizen and the obligation to be a well-informed voter. As individuals, hopes and aspirations and duties and obligations make up a large part of our mental life and constitute the motivational scaffolding for much of our behaviour.

Now comes the study design:

The present studies investigated neural activity when participants were asked to think about self-relevant agendas related to either a promotion (think about your hopes and aspirations) or prevention (think about your duties and obligations) focus. We compared neural activity associated with thinking about these two different types of self-relevant agendas and with thinking about non-self-relevant topics (distraction). We expected greater activity in anterior and/or posterior medial regions associated with these two self-reflection conditions compared with the distraction control condition because thinking about one's agendas, like thinking about one's traits, is self-referential. Such a finding would also be consistent, for example, with Luu and Tucker's (2004) proposal that both anterior cingulate and posterior cingulate cortex contribute to action regulation by representing goals and expectancies.

And this is what they found:

A double dissociation was found when participants were cued to think about promotion and prevention agendas on different trials for the first time during scanning (Experiment 2) and when they spent several minutes thinking about either promotion or prevention agendas before scanning (Experiment 1), indicating that it results from what participants are thinking about during the scan and not from some general effect (e.g. mood) carried over from the pre-scan period of self-reflection.

Here is what they discuss:

In short, the double dissociation between medial PFC and anterior/inferior medial posterior areas and our two self-reflection conditions indicates that these brain areas serve somewhat different functions during self-focus. There are a number of interesting possibilities that remain to be sorted out. Differential activity in these anterior medial and posterior medial regions as a function of the types of agendas participants were asked to think about could reflect: (i) differences in the representational content in the specific features of agendas, schemas, possible selves and so forth that constitute hopes and aspirations on the one hand and duties and obligations on the other (cf. Luu and Tucker, 2004); (ii) differences in the type(s) of component processes these agendas are likely to engage and/or the representational content they are likely to activate, for example, discovering new possibilities (hopes) vs retrieving episodic memories (e.g. Maddock et al., 2001) of past commitments (duties); (iii) differences in affective significance of hopes and aspirations (attaining the positive) and duties and obligations (avoiding the negative, Higgins, 1997; 1998); (iv) different aspects of the subjective experience of self, such as the subjective experience of control (an instrumental self) vs the subjective experience of awareness (an experiential self; Johnson, 1991; Johnson and Reeder, 1997; compare, e.g. Searle, 1992 and Weiskrantz, 1997, vs Shallice, 1978 and Umilta, 1988); (v) differences in the social significance of hopes and aspirations (more individual) and duties and obligations (involving others). This last possibility is suggested by findings linking the posterior cingulate with taking the perspective of another (Jackson et al., 2006). It may be that thinking about duties and obligations (a more outward focus) tends to involve more perspective-taking than does thinking about hopes and aspirations (a more inward focus). 
The greater number of mental/emotional references from the promotion group on the pre-scan essay and the tendency for a greater number of references to others from the prevention group are consistent with the hypothesis that medial PFC activity is associated with a more inward focus whereas posterior cingulate/precuneus activity is associated with a more outward, social focus. Clarifying the basis of the similarities and differences between neural activation associated with thinking about hopes and aspirations vs duties and obligations would begin to help differentiate the relative roles of brain regions in different types of self-reflective processing.

They do discuss the clinical significance of their studies, but not in the terms I would have liked. I would like to see whether there is state/trait hyperactivity and a dissociation between mPFC and PCC activation when the variable of a depressive or manic episode is introduced. I'll place my bets that there would be an interaction between the type of episode and overactivity in the corresponding default-network regions; but I would like to see that data collected.

So my thesis is that the self-reflective, self-focused default network is overactive in bipolar/psychotic spectrum people, and that a bias or tilt towards promotion or prevention focus leads to their recurring, periodic episodes of mania and depression.

Lastly, let me touch upon affect in these states and what Higgins had to say about this in his paper covered yesterday. Higgins proposed that bipolar disorder is due to a promotion focus, with mania induced when there is not much mismatch (or awareness of mismatch) between the ideal and real self, while depression, sadness and melancholia are induced when one becomes aware of the discrepancy between the ideal and the real self. He proposes that a discrepancy between the 'ought' and real self leads to anxiety and nervousness/agitation, while a prevention focus and congruency between the 'ought' and real self leads to calmness/quiescence.

I disagree with his formulation, inasmuch as I differentiate between a regulatory focus and the corresponding awareness of discrepancies along that dimension. To Higgins they are the same: if someone has a promotion focus, he would also be more aware of the discrepancies between his ideal and real self and thus be saddened. I disagree. I believe that if one has a promotion focus, one is driven by goals to bring the real self as close to the ideal self as possible; if one is not able to do so, one will use defense mechanisms to delude oneself and will not admit the reality of the failure, as incongruence along the focused dimension is too painful. However, because one is consciously focused on promotion, one will be aware of trade-offs and will acknowledge to oneself that the 'ought' self, which is anyway not too important for one's self-concept, is not congruent with the real self. Thus, one with a predominant promotion focus may be painfully aware of the discrepancy between his 'ought' and real self and thus might be nervous, agitated and irritable: all symptoms of mania.

A depressive person, on the other hand, has a predominant prevention focus, and all actions/ruminations are driven by responsibilities and obligations. Here, acknowledging to oneself that one has failed in meeting obligations may be catastrophic, so one will try to delude oneself that one is closer to the 'ought' self than is actually the case. However, one may not require any defense mechanisms when judging the discrepancy between the 'ideal' and real self, as that 'ideal' self is no longer a matter of life and death! One would be aware that one is not focusing much on hopes and aspirations, and thus feel despondent/sad/melancholic: again, classical symptoms of depression. Yet, despite the affect of sadness, all rumination would be focused on the 'ought' self, and thus the content would be of guilt, duties, burdens, responsibilities, etc.

I'm sure there is some grain of truth in my formulation, but I won't be able to state it emphatically unless the above-proposed dissociation study involving the default regions and bipolar people is done. If one of you decides to do it, do let me know the results, even if they contradict the thesis.
Johnson, M. (2006). Dissociating medial frontal and posterior cingulate activity during self-reflection. Social Cognitive and Affective Neuroscience, 1(1), 56-64. DOI: 10.1093/scan/nsl004
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.


Children not mini-adults, at least in cognitive control

A predominant, but unstated, assumption that biases many research paradigms is that children are just mini-adults with less well-developed mechanisms, fundamentally using and relying on the same unitary cognitive mechanisms as adults. This has proven wrong time and again, and most psychologists now agree that children view the world in a fundamentally different manner from adults. I have covered some research in the past showing, for example, that while differentiating between two color hues (categorical color perception), children show more right-hemisphere (non-verbal) dominance, while adults rely on the left hemisphere (verbal knowledge). Over development, the RH processes are overshadowed by the maturing LH verbal processes, as far as categorical perception is concerned.

This recent PNAS article, by none other than the famed Chris Chatham of Developing Intelligence fame, is an effort in the same direction, showing that children use a different mechanism from adults when it comes to cognitive control. While adults use a more proactive cognitive control, children rely on a reactive cognitive control. The authors do a good job of describing proactive and reactive cognitive control, so over to them:

Although sometimes derided as ‘‘creatures of habit,’’ humans develop an unparalleled ability to adaptively control thought and behavior in accordance with current goals and plans. Dominant theories of cognitive control suggest that this flexibility is enabled by the proactive regulation of behavior through sustained inhibition of inappropriate thoughts and actions, the active biasing of task-relevant thoughts, or construction of rule-like representations. Theories of the developmental origins of cognitive control converge in positing that children engage these same proactive processes, but in a weaker form, with less strength or stability, less resistance toward habitual responses, or degraded complexity.
However, children can be notoriously constrained to the present, raising the possibility that the temporal dynamics of immature cognitive control are fundamentally different from that of adults. Specifically, we hypothesized that young children may show ‘‘reactive’’ as opposed to ‘‘proactive’’ context processing , characterized by a failure to proactively prepare for even the predictable future and a tendency to react to events only as they occur, retrieving information from memory as needed in the moment. For lack of age-appropriate methods, the possibility of this qualitative developmental shift has not been directly tested.

They also describe the paradigm used beautifully, so I again quote from the article:

In the AX-CPT, subjects provide a target response to a particular probe (‘‘X’’) if it follows a specific contextual cue (‘‘A’’). Nontarget responses are provided to other cue–probe sequences (‘‘A’’ then ‘‘Y,’’ ‘‘B’’ then ‘‘X,’’ or ‘‘B’’ then ‘‘Y’’), each occurring with lower probability than the target pair. This asymmetry in trial type frequency is critical for revealing distinct behavioral profiles for proactive versus reactive control. Proactive control supports good BX trial performance at the expense of AY trials. Maintenance of the ‘‘B’’ cue supports a nontarget response to the subsequent ‘‘X’’ probe; however, maintenance of the ‘‘A’’ cue leads to anticipation of an X and thus a target response (due to the expectancy effect cultivated by the asymmetry in trial type frequencies), which can lead to false alarms in AY trials. Reactive control leads to the opposite pattern. The preceding cue is retrieved when needed, that is, in response to ‘‘X’’ probes but not to ‘‘Y’’ probes. Such retrieval renders BX trials vulnerable to retrieval-based interference; the lack of such retrieval on AY trials means that false alarms are less likely in this case. Similarly, proactive control should lead to increased delay-period effort, whereas reactive control should lead to increased effort to probes.
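The opposite error signatures of the two control modes can be sketched as a toy model. This is my own illustration, not the paper's analysis, and the specific error rates are invented assumptions; only the ordering (proactive hurts AY, reactive hurts BX) reflects the logic described above:

```python
# Toy model of proactive vs. reactive control in the AX-CPT.
# The numeric error rates below are illustrative assumptions.
def error_rate(strategy, trial):
    if strategy == "proactive":
        # Cue is actively maintained: an 'A' cue builds expectancy of a
        # target, so 'A' followed by 'Y' invites false alarms, while the
        # maintained 'B' cue protects BX trials.
        return {"AX": 0.05, "AY": 0.30, "BX": 0.05, "BY": 0.02}[trial]
    else:  # reactive
        # Cue is retrieved only when an 'X' probe appears, so BX trials
        # suffer retrieval-based interference while AY trials stay easy.
        return {"AX": 0.05, "AY": 0.05, "BX": 0.30, "BY": 0.02}[trial]

# The signature double dissociation the authors test for:
assert error_rate("proactive", "AY") > error_rate("proactive", "BX")
assert error_rate("reactive", "BX") > error_rate("reactive", "AY")
```

The point of the sketch is only that the same task yields mirror-image error profiles depending on when the cue is used, which is what makes the paradigm diagnostic.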

What they found was consistent with their hypothesis. The reaction-time data, the effort data gauged from pupillometry, and the speed-accuracy trade-off data all pointed to the fact that children used a reactive cognitive control mechanism while adults used a proactive one. This is what they conclude:

By dissociating proactive and reactive control mechanisms in children, our findings call into question a previously untested assumption of developmental theories of cognitive control, that is, relative to young adults, weaker but qualitatively similar control processes guide the task performance of children. Of course, children and even infants may be capable of sustaining context representations over shorter delays than the 1.2 s used here, but such limited proactive mechanisms would seem unlikely to strongly influence most behaviors.

Further research is needed to determine the processes that drive the developmental transition from reactive to proactive control. This qualitative shift could reflect genuinely qualitative changes, for example, in metacognitive strategies that allow children to engage proactive control. Alternatively (or additionally), the underlying mechanisms for this qualitative shift could be continuous. For example, the gradual strengthening of task-relevant representations could allow proactive control to become effective, thus supporting a shift in the temporal dynamics of control. In any case, the developmental progression to be addressed is a shift from reactive to proactive control rather than merely positing incremental improvements with development.

I think these are steps in the right direction. I lean towards a stage-theory account of development, so I am supportive of a dramatic developmental stage whereby reactive cognitive control mechanisms are replaced by proactive ones, although both strategies may be equally available to children at the critical age. It may be the case that the neural architecture for proactive CC develops late (just like linguistic CP) and overrides the default reactive CC circuit. That dominance of proactive CC over reactive CC, to me, should mark an important developmental stage.

Thanks Chris, for your wonderful blog posts and this paper!

Chatham, C., Frank, M., & Munakata, Y. (2009). Pupillometric and behavioral markers of a developmental shift in the temporal dynamics of cognitive control. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0810002106


Tuesday, March 24, 2009

Beyond pleasure and pain: promotion, prevention, desire and dread.

The hedonic principle says that we are motivated to approach pleasure and avoid pain. This, as per Higgins, is too simplistic a formulation. He supplants it with his concepts of regulatory focus, regulatory anticipation and regulatory reference. That is a lot of jargon for a single post, but let us see if we can make sense of it.

First, let us conceptualize a desired end-state that an organism wants to be in, say eating food and satisfying hunger. This desired end-state becomes the current goal of the organism and leads to goal-directed behavior. Now, it is proposed that, given this desired end-state, the organism has two ways to go about achieving or moving towards it. If the organism has a promotion or achievement self-regulation focus, then it will be more sensitive to whether the positive outcome is achieved or not, and will thus have an approach orientation whereby it tries to match its next state to the desired state, or to approach the desired end-state as closely as possible. On the other hand, if the organism has a prevention or safety self-regulation focus, then it will be more sensitive to the negative outcome of becoming worse off after the behavior, and will have an avoidance orientation whereby it tries to minimize the mismatch between its next state and the desired state.

Thus, given n next states with different food availability, the person with a promotion focus will choose a next state that is as close as possible (say, within a particular threshold) to the desired state of satiety, while the person with a prevention focus will be driven by avoiding all the states that have sub-threshold food availability and are thus mismatched with the end-goal of satiety. The candidate sets are derived differently for the two groups: the first set is built by including states that fall within a particular range of the end-state; the second set is built by excluding all states that are not within that range. Put this way, it is easy to see that these strategies of promotion or prevention focus place different cognitive and computational demands: the former requires exploration/maximizing, while the latter may be satisfied by satisficing
(see my earlier post on exploration/exploitation and satisficers/maximizers, where I believe I was slightly mistaken).

Now that I have explained (hopefully in simple terms) the concept of self-regulatory focus, let me quote from the article and show how Higgins arrives at the same.

The theory of self-regulatory focus begins by assuming that the hedonic principle should operate differently when serving fundamentally different needs, such as the distinct survival needs of nurturance (e.g., nourishment) and security (e.g., protection). Human survival requires adaptation to the surrounding environment, especially the social environment (see Buss, 1996). To obtain the nurturance and security that children need to survive, children must establish and maintain relationships with caretakers who provide them with nurturance and security by supporting, encouraging, protecting, and defending them (see Bowlby, 1969, 1973). To make these relationships work, children must learn how their appearance and behaviors influence caretakers' responses to them (see Bowlby, 1969; Cooley, 1902/1964; Mead, 1934; Sullivan, 1953). As the hedonic principle suggests, children must learn how to behave in order to approach pleasure and avoid pain. But what is learned about regulating pleasure and pain can be different for nurturance and security needs. Regulatory-focus theory proposes that nurturance-related regulation and security-related regulation differ in regulatory focus. Nurturance-related regulation involves a promotion focus, whereas security-related regulation involves a prevention focus.
People are motivated to approach desired end-states, which could be either promotion-focus aspirations and accomplishments or prevention-focus responsibilities and safety. But within this general approach toward desired end-states, regulatory focus can induce either approach or avoidance strategic inclinations. Because a promotion focus involves a sensitivity to positive outcomes (their presence and absence), an inclination to approach matches to desired end-states is the natural strategy for promotion self-regulation. In contrast, because a prevention focus involves a sensitivity to negative outcomes (their absence and presence), an inclination to avoid mismatches to desired end-states is the natural strategy for prevention self-regulation (see Higgins, Roney, Crowe, & Hymes, 1994).

Figure 1 (not shown here, go read the article for the figure) summarizes the different sets of psychological variables discussed thus far that have distinct relations to promotion focus and prevention focus (as well as some variables to be discussed later). On the input side (the left side of Figure 1), nurturance needs, strong ideals, and situations involving gain-nongain induce a promotion focus, whereas security needs, strong oughts, and situations involving nonloss-loss induce a prevention focus. On the output side (the right side of Figure 1), a promotion focus yields sensitivity to the presence or absence of positive outcomes and approach as strategic means, whereas a prevention focus yields sensitivity to the absence or presence of negative outcomes and avoidance as strategic means.
Higgins then goes on to describe many experiments that support this differential regulatory focus and how it differs from pleasure-pain valence-based approaches. He also discusses regulatory focus in terms of signal detection theory, and here it is important to note that a promotion focus leads to leaning towards (being biased towards) increasing hits and reducing misses, while a prevention focus means leaning towards increasing correct rejections and minimizing false alarms. Thus, a promotion-focused individual is driven by finding correct answers and minimizing errors of omission, while a prevention-focused person is driven by avoiding incorrect answers and minimizing errors of commission. In Higgins' words:

Individuals in a promotion focus, who are strategically inclined to approach matches to desired end-states, should be eager to attain advancement and gains. In contrast, individuals in a prevention focus, who are strategically inclined to avoid mismatches to desired end-states, should be vigilant to insure safety and nonlosses. One would expect this difference in self-regulatory state to be related to differences in strategic tendencies. In signal detection terms (e.g., Tanner & Swets, 1954; see also Trope & Liberman, 1996), individuals in a state of eagerness from a promotion focus should want, especially, to accomplish hits and to avoid errors of omission or misses (i.e., a loss of accomplishment). In contrast, individuals in a state of vigilance from a prevention focus should want, especially, to attain correct rejections and to avoid errors of commission or false alarms (i.e., making a mistake). Therefore, the strategic tendencies in a promotion focus should be to insure hits and insure against errors of omission, whereas in a prevention focus, they should be to insure correct rejections and insure against errors of commission .
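The signal-detection framing above amounts to placing the decision criterion differently. Here is a toy sketch (my own, not from the paper): a liberal criterion (eager, promotion-like) trades false alarms for hits, while a conservative criterion (vigilant, prevention-like) trades misses for correct rejections. The evidence values and criterion placements are invented assumptions:

```python
# Toy signal-detection sketch of the two regulatory focuses.
def outcomes(trials, criterion):
    """Tally hits, misses, false alarms and correct rejections."""
    counts = {"hit": 0, "miss": 0, "fa": 0, "cr": 0}
    for evidence, signal_present in trials:
        said_yes = evidence > criterion
        if signal_present:
            counts["hit" if said_yes else "miss"] += 1
        else:
            counts["fa" if said_yes else "cr"] += 1
    return counts

# (evidence strength, whether a signal was actually present) - assumed data
trials = [(0.9, True), (0.6, True), (0.4, True), (0.7, False), (0.3, False)]

promotion = outcomes(trials, criterion=0.5)   # eager: liberal criterion
prevention = outcomes(trials, criterion=0.8)  # vigilant: conservative criterion

assert promotion["miss"] < prevention["miss"]  # promotion insures against omission
assert prevention["fa"] < promotion["fa"]      # prevention insures against commission
```

The same evidence, read through two criteria, reproduces Higgins' two strategic profiles: the eager observer maximizes hits, the vigilant one maximizes correct rejections.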

He next discusses Expectancy × Value effects in utility research. Basically, whenever one tries to decide between two or more alternative actions/outcomes, one tries to find the utility of a particular decision/behavioral act based on both the value and the expectancy of the outcome. Value means how desirable or undesirable that outcome is to the person. Expectancy means how probable it is that the contemplated action would lead to the outcome. By way of an example: if I am hungry, I want to eat food. Let's say there are two actions or decisions, with different utilities, that can lead to my hunger's reduction. The first involves begging for food from the shopkeeper; the second involves stealing the food from the shopkeeper. The first may have positive value (begging might not be that embarrassing) but low expectancy (the shopkeeper is miserly and unsympathetic), while the second may have negative value (I believe that stealing is wrong and would like to avoid that act) but high expectancy (I am sure I'll be able to steal the food and satisfy my hunger). The utilities I impart to the two acts may determine which act I eventually decide to indulge in.
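The begging-vs-stealing example can be written as a one-line utility computation. The numbers are purely illustrative assumptions of mine, not from Higgins:

```python
# Multiplicative Expectancy x Value utility, with assumed numbers for
# the begging-vs-stealing example (value in [-1, 1], expectancy in [0, 1]).
def utility(value, expectancy):
    return value * expectancy

beg = utility(value=0.4, expectancy=0.2)     # mildly positive value, low expectancy
steal = utility(value=-0.6, expectancy=0.9)  # negative value, high expectancy

assert beg > steal  # under these assumed numbers, begging wins
```

Note that with a multiplicative rule, any act with negative value loses to any act with positive value, however unlikely; that rigidity is part of what Higgins' promotion/prevention distinction, discussed next, is meant to soften.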

Higgins touches on research showing that Expectancy × Value has a multiplicative effect, i.e., as expectancy and value increase, the motivation to take that decision/course of action increases non-linearly. He clarifies that this interaction effect is seen in promotion focus, but not in prevention focus:

Expectancy-value models of motivation assume not only that expectancy and value have an impact on goal commitment as independent variables but also that they combine multiplicatively (Lewin, Dembo, Festinger, & Sears, 1944; Tolman, 1955; Vroom, 1964; for a review, see Feather, 1982). The multiplicative assumption is that as either expectancy or value increases, the impact of the other variable on commitment increases. For example, it is assumed that the effect on goal commitment of higher likelihood of goal attainment is greater for goals of higher value. This assumption reflects the notion that the goal commitment involves a motivation to maximize the product of value and expectancy, as is evident in a positive interactive effect of value and expectancy. This maximization prediction is compatible with the hedonic or pleasure principle because it suggests that people are motivated to attain as much pleasure as possible.
Despite the almost universal belief in the positive interactive effect of value and expectancy, not all studies have found this effect empirically (see Shah & Higgins, 1997b). Shah and Higgins proposed that differences in the regulatory focus of decision makers might underlie the inconsistent findings in the literature. They suggested that making a decision with a promotion focus is more likely to involve the motivation to maximize the product of value and expectancy. A promotion focus on goals as accomplishments should induce an approach-matches strategic inclination to pursue highly valued goals with the highest expected utility, which maximizes Value × Expectancy. Thus, the positive interactive effect of value and expectancy assumed by classic expectancy-value models should increase as promotion focus increases.
But what about a prevention focus? A prevention focus on goals as security or safety should induce an avoid-mismatches strategic inclination to avoid all unnecessary risks by striving to meet only responsibilities that are clearly necessary. This strategic inclination creates a different interactive relation between value and expectancy. As the value of a prevention goal increases, the goal becomes a necessity, like the moral duties of the Ten Commandments or the safety of one's child. When a goal becomes a necessity, one must do whatever one can to attain it, regardless of the ease or likelihood of goal attainment. That is, expectancy information becomes less relevant as a prevention goal becomes more like a necessity. With prevention goals, motivation would still generally increase when the likelihood of goal attainment is higher, but this increase would be smaller for high-value goals (i.e., necessities) than low-value goals. Thus, the second prediction was that the positive interactive effect of value and expectancy assumed by classic expectancy value models would not be found as prevention focus increased. Specifically, as prevention focus increases, the interactive effect of value and expectancy should be negative.

And that is exactly what they found! The paper touches on many other corroborating studies, and the interested reader can go to the source for more. Here I will now focus on his concepts of regulatory reference and regulatory anticipation.
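The sign flip Shah and Higgins predict can be sketched with a toy model. This is my own illustration; the exact functional form is an assumption, not something taken from the paper:

```python
# Toy model (my own illustration, not Higgins'): goal commitment as a function
# of value, expectancy and regulatory focus. The interaction term is positive
# under promotion focus (maximize Value x Expectancy) and negative under
# prevention focus (high-value goals become necessities, so expectancy
# information matters less).

def goal_commitment(value, expectancy, focus):
    """value and expectancy in [0, 1]; focus is 'promotion' or 'prevention'."""
    base = value + expectancy
    interaction = value * expectancy
    if focus == "promotion":
        return base + interaction  # positive Value x Expectancy interaction
    if focus == "prevention":
        return base - interaction  # negative interaction: necessities ignore odds
    raise ValueError("focus must be 'promotion' or 'prevention'")

# Raising expectancy from 0.1 to 0.9 helps high-value goals more than
# low-value goals under promotion, but less under prevention.
promo_gain_high = goal_commitment(0.9, 0.9, "promotion") - goal_commitment(0.9, 0.1, "promotion")
promo_gain_low = goal_commitment(0.1, 0.9, "promotion") - goal_commitment(0.1, 0.1, "promotion")
prev_gain_high = goal_commitment(0.9, 0.9, "prevention") - goal_commitment(0.9, 0.1, "prevention")
prev_gain_low = goal_commitment(0.1, 0.9, "prevention") - goal_commitment(0.1, 0.1, "prevention")
```

With these toy numbers, the expectancy boost is larger for the high-value goal under promotion (1.52 vs. 0.88) but smaller under prevention (0.08 vs. 0.72), mirroring the positive and negative interactions described in the quote.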

Regulatory Reference is the tendency to be driven either by positive and desired end-states as the reference point and goal, or by negative and undesired end-states as the goals that are most prominent. For example, eating food is a desirable end-state, while being eaten by others is an undesired end-state. Now, an organism may be driven by the end-state of 'getting food' and thus would be regulating approach behavior of how to go about getting food. It is important to contrast this with regulatory focus: while searching for food, it may have a promotion orientation focusing on matching the end-state, or it may have a prevention focus, i.e., avoiding states that don't contain food; but it is still driven by a 'positive' or desired end-state. On the other hand, when the regulatory reference is a negative or undesirable end-state like 'becoming food', then avoidance behavior is regulated, i.e., behavior is driven by avoiding the end-state. Thus, any state that keeps one away from 'being eaten' is the one that is desired; this may involve a promotion focus, as in approaching states that are opposite of the undesired state and provide safety from the predator, or a prevention focus, as in avoiding states that can lead one closer to the undesired end-state. In the words of Higgins:
Inspired by these latter models in particular, Carver and Scheier (1981, 1990) drew an especially clear distinction between self-regulatory systems that have positive versus negative reference values. A self-regulatory system with a positive reference value has a desired end state as the reference point. The system is discrepancy reducing and involves attempts to move one's (represented) current self-state as close as possible to the desired end-state. In contrast, a self-regulatory system with a negative reference value has an undesired end-state as the reference point. This system is discrepancy-amplifying and involves attempts to move the current self-state as far away as possible from the undesired end-state.

To me, Regulatory Reference is similar to the Value associated with a utility decision: it determines whether, when we are choosing between different actions/goals, the end-states or goals have a positive or a negative connotation.

That brings us to Regulatory Anticipation: this is the now well-known desire/dread functionality of dopamine-mediated brain regions that are involved in the anticipation of pleasure and pain and drive behavior. This anticipation of pleasure or pain is driven by our expectancies of how our actions will yield the desired/undesired outcomes and can be treated as the equivalent of Expectancy in utility decisions. The combination of the independent factors of regulatory reference and regulatory anticipation will determine which end-state or goal is activated as the next target for the organism. Once activated, the organism's tendencies towards promotion focus or prevention focus would determine how it strategically uses approach/avoidance mechanisms to achieve that goal or move towards the end-state. Let us also look at regulatory anticipation as described by Higgins:

Freud (1920/1950) described motivation as a "hedonism of the future." In Beyond the Pleasure Principle (Freud, 1920/1950), he postulated that people go beyond total control of the "id" that wants to maximize pleasure with immediate gratification to regulating as well in terms of the "ego" or reality principle that avoids punishments from norm violations. For Freud, then, behavior and other psychical activities were driven by anticipations of pleasure to be approached (wishes) and anticipations of pain to be avoided (fears). Lewin (1935) described how the "prospect" of reward or punishment is involved in children learning to produce or suppress, respectively, certain specific behaviors (see also Rotter, 1954). In the area of animal learning, Mowrer (1960) proposed that the fundamental principle underlying motivated learning was regulatory anticipation, specifically, approaching hoped-for desired end-states and avoiding feared undesired endstates. Atkinson's (1964) personality model of achievement motivation also proposed a basic distinction between self-regulation in relation to "hope of success" versus "fear of failure." Wicker, Wiehe, Hagen, and Brown (1994) extended this notion by suggesting that approaching a goal because one anticipates positive affect from attaining it should be distinguished from approaching a goal because one anticipates negative affect from not attaining it. In cognitive psychology, Kahneman and Tversky's (1979) "prospect theory" distinguishes between mentally considering the possibility of experiencing pleasure (gains) versus the possibility of experiencing pain (losses).
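The selection step sketched above, where regulatory reference supplies the value of each candidate end-state and regulatory anticipation supplies its expectancy, can be illustrated with a toy model. The goal names, numbers, and the expected-value rule are all my own invented assumptions:

```python
# Hypothetical sketch (mine, not Higgins'): regulatory reference supplies the
# value of each candidate end-state (the value of attaining it, or of
# successfully avoiding an undesired state), regulatory anticipation supplies
# the expectancy, and the end-state with the highest expected value is
# activated as the next goal. Goal names and numbers are invented.

def select_goal(candidates):
    """candidates: {goal_name: (value, expectancy)}; returns the activated goal."""
    return max(candidates, key=lambda g: candidates[g][0] * candidates[g][1])

goals = {
    "get food": (0.8, 0.7),        # desired end-state, likely attainable
    "avoid predator": (0.9, 0.5),  # value of reaching safety, less certain
}
chosen = select_goal(goals)  # expected values here: 0.56 vs. 0.45
```

Once a goal is selected this way, promotion or prevention focus would then shape the strategy used to pursue it, as discussed above.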

Why have I been dwelling on this, and how does it fit into the larger framework? Wait for the next post, but the hint is that I believe bipolar mania as well as depression is driven by too much goal-oriented activity: in mania the focus being promotion, while in depression the focus being prevention. Higgins does discuss mania and depression in his article, but my views differ and would require a new and separate blog post. Stay tuned!
Higgins, E. T. (1997). Beyond pleasure and pain. American Psychologist, 52, 1280-1300.


Monday, March 23, 2009

The first 30 seconds: Trustworthiness, Dominance and their neural correlates

A lot has already been written in the blogosphere regarding this study that found the brain regions involved in first impression formation. I view the study from a slightly different angle, but first let me introduce the study and its main findings.

The study was focused on finding the brain regions that are involved in the impression formation of a new social entity. We all know that we form automatic and consistent first impressions of strangers we meet, based on everything from their face to the social information that is available about them. The authors theorized that to know which regions of the brain are involved in evaluating a person for the first time, it would be sufficient to know which regions of the brain were engaged more while evaluation-consistent information was being processed. To understand this logic, consider the brain regions involved in memory and how they are discovered. Typically, a series of words/images to be remembered is presented to the subjects while their brains are simultaneously imaged. Later, a memory recall/recognition test is administered. It is found that some brain regions are consistently more active during encoding of the original stimuli that are later recalled/recognized correctly. This effect is known as the Difference in Memory effect (DM effect). The fact that these areas are differentially engaged during encoding of remembered stimuli, as opposed to forgotten stimuli, is taken as evidence that these brain regions are involved in the encoding of memory. Similar to this effect, it is found that evaluations that are consistent with the later overall evaluation of the person engage some brain regions more than evaluations that are inconsistent with the later overall evaluation. This difference in evaluation (DE) effect can be used to locate the regions that are involved in social evaluation or the formation of first impressions.
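The sorting logic behind both the DM and DE effects can be sketched as follows; the trial labels and activity values are invented for illustration:

```python
# Sketch of the subsequent-sorting logic behind the DM and DE effects
# described above (all trial data invented): encoding trials are binned by
# the later outcome, and mean activity is contrasted across bins.

def difference_effect(trials, outcome):
    """trials: list of (label, activity); outcome: the label consistent with
    the later evaluation (or 'remembered', for the DM analogue).
    Returns mean activity for consistent minus inconsistent trials."""
    consistent = [a for label, a in trials if label == outcome]
    inconsistent = [a for label, a in trials if label != outcome]
    return (sum(consistent) / len(consistent)
            - sum(inconsistent) / len(inconsistent))

# Toy data: a region whose encoding activity is higher for the sentences that
# end up guiding a positive final impression shows a positive DE effect.
trials = [("positive", 1.2), ("positive", 1.0), ("negative", 0.4), ("negative", 0.6)]
de_effect = difference_effect(trials, "positive")  # 0.6 with these numbers
```

A region showing a reliably positive contrast across subjects is then taken to be involved in forming, not merely holding, the evaluation.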

Previous studies had indicated that the dmPFC was engaged in social evaluation; however, many cognitive factors other than purely evaluative ones might be in action here.

It has also been indicated that the amygdala is involved in both social evaluation and valence-based evaluations and might be involved in first impression formation. So the authors hypothesized that they would find differential amygdala activity for consistent as opposed to inconsistent evaluations, and this is what they actually observed. They also found that the PCC was differentially engaged while forming first impressions and was thus another brain region involved in evaluating others.

Here is the study design:

To test these hypotheses, we developed the difference in evaluation procedure (see Figure), allowing us to sort social information encoding trials by subsequent evaluations. More specifically, we measured blood oxygenation level–dependent (BOLD) signals using whole brain fMRI during exposure to different person profiles. Each profile consisted of 6 person-descriptive sentences implying different personality traits. The sentences varied gradually in their positive to negative valence (or vice versa) but evoked equivalent levels of arousal. A 12-s interval with the face alone separated the positive and the negative segments. Subsequently, an evaluation slide instructed subjects to form their impression on an 8-point scale. On the basis of these evaluations, we determined which of the presented descriptive sentences guided evaluations (evaluation relevant) and which did not (evaluation irrelevant). For example, if a subject's evaluation was positive, we assigned the positive segment of the profile to the evaluation-relevant category and the negative segment to the evaluation-irrelevant category. We then identified the brain regions dissociating items from each category (that is, difference in evaluation effect). Notably, we correlated subjects' BOLD signal with their own individual evaluations. This allowed us to identify brain regions that were consistent across subjects in processing evaluation-relevant information regardless of the particular stimuli that they considered. Immediately after the scanning session, subjects underwent a memory-recognition task.

The results were clear: while the dmPFC was involved in social evaluations, it was not differentially engaged and thus had a general role to play, perhaps holding the representation of the evaluation after it had already been formed; in contrast, both the amygdala and the PCC were differentially recruited and thus underlie first-time evaluations. In the words of the authors:

Understanding the neural substrates of social cognition has been one of the core motivations driving the burgeoning field of social neuroscience. A number of studies have highlighted the dmPFC in the processing of social information. Our results provide further evidence that the dmPFC is recruited to process person-descriptive information during impression formation. However, BOLD responses in this region do not dissociate evaluation-relevant from evaluation-irrelevant information, suggesting that the dmPFC is not essential for the evaluative component of impression formation. In fact, social evaluation recruits brain regions that are not socially specialized but are more generally involved in valuation and emotional processes.

Valuation and emotional processes, as a substantial amount of research has shown, are characteristic of the amygdala. In particular, the amygdala is considered to be a crucial region in learning about motivationally important stimuli. It is also implicated in social inferences that are based on facial and bodily expressions, in inferences of trustworthiness and in the capacity to infer social attributes. Moreover, the involvement of amygdala in social inferences might be independent of awareness or explicit memory. For example, increased amygdala responses were correlated with implicit, but not explicit, measures of the race bias, as well as with presentation of faces previously presented in an emotional, but not neutral, context, regardless of whether subjects could explicitly retrieve this information. Here we provide evidence linking the two domains of affective learning and social processing by showing that the amygdala is engaged in the formation of subjective value assigned to another person in a social encounter.

Although the amygdala is typically implicated in the processing of negative affect and negative stimuli have been shown to modulate it more than positive stimuli, we found that the amygdala processed both positive and negative evaluation-relevant information, suggesting that amygdala activity is driven by factors other than mere valence, such as the motivational importance or salience of the stimuli. This result is consistent with recent findings showing enhanced amygdala responses for both positive and negative stimuli as a function of motivational importance.

Evidence related to the PCC has been more diverse. There have been reports in the social domain, such as involvement in theory of mind and self-referential outward-focused thought, in memory related processes such as autobiographical memory of family and friends, and in emotional modulation of memory and attention. More recently, the PCC has been linked with economic decision making, the assignment of subjective value to rewards under risk and uncertainty, and credit assignment in a social exchange. A common denominator of these studies might be that all involved either a social or an outward-directed valuation component. Our task also encompasses these features, extending the role of the PCC to value assignment to social information guiding our first impressions of others.

The amygdala and the PCC are both interconnected with the thalamus as part of a larger circuitry that is implicated in emotion, arousal and learning. Beyond the known role of the amygdala and the PCC in social-information processing and value representation, our results suggest a neural mechanism underlying the online formation of first impressions. When encoding everyday social information during a social encounter, these regions sort information on the basis of its personal and subjective importance and summarize it into an ultimate score, a first impression. Other regions, such as the ventromedial PFC, the striatum and the insula, have also been implicated in valuation processes. However, these regions did not emerge in our difference in evaluation effect analysis. This might suggest a possible dissociation in the valuation network between regions engaged in the formation of value and its subsequent representation and updating. The latter regions would not be engaged during encoding and therefore would not show a difference in evaluation effect but would instead have an effect once the evaluation is formed. The amygdala and the PCC probably participate in both value formation and its representation. The difference in evaluation procedure may provide a useful tool for disentangling the different components of the valuation system and their specific contributions to social versus nonsocial evaluations.

Now I would like to link all this new research with earlier research on face attributes that found that there are two orthogonal factors that characterize a face: trustworthiness (valence) and dominance. It is important to note that faces are an important mechanism by which we make snap judgments, and if it has been found that there are two orthogonal dimensions (found using factor analysis) on which we judge faces and form first impressions, there is no reason to suppose that those same two orthogonal factors would not come into play when we form first impressions based on social information rather than the face. What I am trying to say is that non-face, social-information-driven social evaluation would still be structured around whether the social information pointed to the person as Trustworthy or as Dominant. I would expect that there would be different brain regions specialized for these two functions. We know that the amygdala is specialized for trustworthiness judgments, and that fits in with one of the areas that has been identified for snap judgments. That leaves us with the PCC, which has normally been implicated in self-referential thinking with an outward and evaluative (as opposed to inward and executive) focus, and also a preventive focus. It seems likely that this region would be used to evaluate a social other and judge whether he has the ability to execute, harm and dominate oneself. So, what I would like to see is a study that dissociates the social information provided to subjects in terms of trustworthiness and dominance factors and sees if there is a dissociation in the evaluative regions of the amygdala and PCC; or maybe one can just factor-analyze the results of the original study and see if the same two factors emerge! I am excited, and would love to see these studies being performed!
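A minimal version of the orthogonality check suggested above could score each profile on a trustworthiness composite and a dominance composite and test whether the two are uncorrelated. Everything below (the composites, the profile scores, the correlation check) is invented for illustration, not data from either study:

```python
# Hypothetical orthogonality check: compute trustworthiness and dominance
# composites per profile and verify they are (near-)uncorrelated. All values
# are invented for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Four toy profiles crossing the two dimensions independently.
trustworthiness = [0.9, 0.9, 0.1, 0.1]
dominance = [0.9, 0.1, 0.9, 0.1]
r = pearson_r(trustworthiness, dominance)  # ~0 for orthogonal dimensions
```

If the two composites came out uncorrelated in real profile data, the difference-in-evaluation contrast could then be run separately for trustworthiness-relevant and dominance-relevant sentences to look for an amygdala/PCC dissociation.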
Schiller, D., Freeman, J., Mitchell, J., Uleman, J., & Phelps, E. (2009). A neural mechanism of first impressions. Nature Neuroscience. DOI: 10.1038/nn.2278
Oosterhof, N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences, 105 (32), 11087-11092. DOI: 10.1073/pnas.0805664105


The varied causes of depression

There is a new review article in CMAJ about the neurobiology of depression. And then there is the multi-part series on depression over at Neurotopia by the excellent Sci.

So I thought I'd link these for the benefit of my readers. While it may sound like an oxymoron to do a review of a review, let me briefly summarize the review article.

The article lists three important contributing factors for depression: the first is genetics; the second, childhood stress; and the third, ongoing or recent psychosocial stress. And of course, different neurobiological mechanisms underlie all three factors.

To take the first example, we have the famed monoamine theory of depression, whereby low baseline serotonin (and norepinephrine) levels in the brain are held responsible for depressive symptoms. This hypothesis derives most of its evidence from the effects of anti-depressants on the brain. Now, depression also has a genetic, heritable component (this is apparent from twin studies); some of the heritability of depression can be explained by polymorphisms of various genes affecting the serotonin system, primary among them being the gene for the Serotonin Transporter, or SERT. Thus, the underlying serotonin system can be treated as one biological system that has a strong genetic component.

As a second example, consider the hypothalamic-pituitary-adrenal (HPA) axis that is involved in the response to stress. This system develops abnormally if the child is exposed to stress in a critical developmental window. Experiments with rats and monkeys confirm that an abnormal and stressful environment during early childhood leads to abnormal functioning of this axis, which later predisposes to depression. Thus, the HPA axis may be taken as a proxy for the component that is due to development and epigenetics.

As a third example, consider Brain-Derived Neurotrophic Factor (BDNF) in the brain. BDNF is responsible for the survival of new neurons and for new synapse formation (synaptic plasticity) during adulthood; new neurons and new synapses help us to learn (by neurogenesis in the hippocampus), especially when the environment is stressful. Now, there are two polymorphisms of the gene coding for BDNF; the 'MET' allele causes reduced hippocampal volume at birth, hypoactivity in the hippocampus in the resting state, increased hippocampal metabolism while learning, and relatively poor hippocampus-dependent memory function. From all this it is apparent that the MET allele somehow leads to less synthesis of BDNF and thus less learning in the hippocampus as a result of reduced neurogenesis/synaptogenesis. Now, the same MET allele also raises the risk of depression, and the mediating factor is the stress responsivity of the individual. Thus, BDNF may mediate the sensitivity of a person to the same external psychosocial stress and might be very crucial via gene-environment interaction effects. Also, prolonged stress, which may result in prolonged BDNF secretion and thus lead to toxicity and opposite, paradoxical effects, may be another putative mechanism linking stress exposure in adulthood to the underlying pathophysiology of reduced neurogenesis.

The above may seem too simplistic, but it points us in the right direction: some neurobiological systems, like the serotonin system, may be largely genetic in nature, and our treatment approaches should be based around this fact. Others, like HPA axis malfunctioning, may be entirely environmental in their origin, and maybe preventive interventions, like ensuring a stress-free childhood for all, should be the policy focus here. Depending on the plasticity of the HPA axis later in life, therapy or medication may be the treatment options. Finally, other neurobiological systems involved, like BDNF and stress sensitivity/over-exposure, may display complex gene-environment interactions, and again, knowing the nature of these systems will help us counter the symptoms using a combination of CBT and medication.
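The gene-environment interaction described for BDNF can be sketched with a toy risk model. All the numbers and the functional form are my own invented illustration, not data from the review:

```python
# Toy gene-environment interaction for the BDNF example (all numbers
# invented): the MET allele is modeled as raising stress sensitivity, so the
# genotype difference in depression risk appears mainly under high stress.

def depression_risk(met_allele, stress):
    """stress in [0, 1]; returns risk on an arbitrary 0-1 scale."""
    sensitivity = 0.8 if met_allele else 0.3
    return min(1.0, 0.1 + sensitivity * stress)

# The genotype gap widens with stress: a gene-environment interaction.
low_stress_gap = depression_risk(True, 0.1) - depression_risk(False, 0.1)
high_stress_gap = depression_risk(True, 0.9) - depression_risk(False, 0.9)
```

This is why main-effect studies of a gene like BDNF can miss its contribution: the allele's effect only emerges when stress exposure is also measured.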

Depression is definitely much too complex a disorder to be completely understood on the basis of a single review article, or even a series of blog posts, but the underlying neurobiological mechanisms and systems clearly indicate how genetics, environment (especially during the critical developmental window) and epigenetics (gene-environment interactions) are involved in its etiology, and how different interventions and treatments have to be developed taking these into account.
aan het Rot, M., Mathew, S., & Charney, D. (2009). Neurobiological mechanisms in major depressive disorder. Canadian Medical Association Journal, 180 (3), 305-313. DOI: 10.1503/cmaj.080697


Sunday, March 22, 2009

Self relevance and the reality-fictional blur

There is a new study in PLoS ONE that argues that we make the reality-fiction distinction on the basis of how personally relevant the event in question is. To be fair, the study focuses on fictional, famous, or familiar (friends and family) entities like Cinderella, Obama, or our mother; based on the fact that these are arranged in increasing order of personal relevance, as well as representing fictional and real characters, it tries to show that one of the means by which we distinguish fictional from real characters is the degree of personal relevance these characters are able to invoke in us.

The authors build upon their previous work, which showed that the amPFC (anterior medial prefrontal cortex) and PCC (posterior cingulate cortex), which are part of the default brain network, are differentially recruited when people are exposed to contexts involving real as opposed to fictional entities. From this neural correlate of the regions involved in distinguishing fiction from reality, and from the known functions of these brain regions in self-referential thinking and autobiographical memory retrieval, the authors hypothesized that the reality-fiction distinction may be mediated by relevance to the self, and that this difference in self-relevance leads to differential engagement of these brain areas. I quote from the paper:

In the first attempt to tackle this issue using functional magnetic resonance imaging (fMRI), we aimed to uncover which brain regions were preferentially engaged when processing either real or fictional scenarios . The findings demonstrated that processing contexts containing real people (e.g., George Bush) compared to contexts containing fictional characters (e.g., Cinderella) led to activations in the anterior medial prefrontal cortex (amPFC) and the posterior cingulate cortex (PCC).

These findings were intriguing for two reasons. First, the identified brain areas have been previously implicated in self-referential thinking and autobiographical memory retrieval. This suggested that information about real people, in contrast to fictional characters, may be coded in a manner that leads to the triggering of automatic self-referential and autobiographical processing. This led to the hypothesis that information about real people may be coded in more personally relevant terms than that of fictional characters. We do, after all, occupy a common social world and have a wider range of associations in relation to famous people. These may be spontaneously triggered and processed further when reading about them. A logical extension of this premise would be that explicitly self-relevant information should therefore elicit such processing to an even greater extent.

To study this hypothesis, they used an experimental design with behavioral measures like reaction time, correctness, and perceived difficulty of judging propositions involving fictional, famous, and close entities. Meanwhile, they also measured, using fMRI, the differential recruitment of brain areas as the subjects performed under the different entity conditions. The experimental design is best summarized by having a look at the figure below.

What they found was that for the control and fictional conditions, the reaction time, correctness, and perceived difficulty associated with the propositions were significantly different (higher RT, lower accuracy, and greater perceived difficulty) than for the famous and friend entity conditions. Thus, from the behavioral data it was apparent that real characters were judged faster, more accurately, and more easily than fictional characters. The fMRI data showed that, as hypothesized, the amPFC and PCC were recruited significantly more in personally relevant contexts and showed a gradient in the expected direction. The figure below summarizes the findings:

In particular, in line with our predictions, regions in and near the amPFC (including the ventral mPFC) and PCC (including the retrosplenial cortex) were modulated by the degree of personal relevance associated with the presented entities. These regions were most strongly engaged when processing high personal relevance contexts (friend-real), secondarily for medium relevance contexts (famous-real) and least of all in the low personal relevance contexts (fiction) (high relevance>medium relevance>low relevance).
The amPFC and PCC regions are known to be commonly engaged during autobiographical and episodic memory retrieval as well as during self-referential processing. Regarding their specific roles, there is evidence indicating that amPFC is comparatively more selective for self-referential processing whereas the PCC/RSC is more selective for episodic memory retrieval . The results of the present study contribute to the understanding of processes implemented in these regions by showing that the demands on autobiographical retrieval processes and self-referential mentation are affected by the degree of personal relevance associated with a processed scenario. It should additionally be noted that the extension of the activations in anterior and ventral PFC regions into subgenual cingulate areas indicates that the degree of personal relevance also modulated responsiveness in affective or emotional regions of the brain .

Here is what the authors have to say about the wider ramifications:

That core regions of the brain's default network are spontaneously modulated by the degree of stimulus-associated personal relevance is a consequential finding for two reasons. Firstly, the findings suggest that one of the factors that guide our implicit knowledge of what is real and unreal is the degree of coded personal relevance associated with a particular entity/character representation.
What this might translate to at a phenomenological level is that a real person feels more “real” to us than a fictional character because we automatically have access to far more comprehensive and multi-flavored conceptual knowledge in relation to the real people than fictional characters. This would also explain why a real person we know personally (a friend) feels more real to us than a real person who we do not know personally (George Bush).

I would say that there are other, broader implications. First, it is important to note that, phenomenologically, schizophrenia/psychosis is characterized by an inability to distinguish reality from fiction: what is fictitious also starts seeming real. A putative mechanism for why even fictional things start assuming 'real' dimensions may be the attribution of personal relevance or significance to those fictional entities. If something, even though fictional in nature, becomes highly personally relevant, then it would be easier to treat it as real. What ties things together is the fact that the default brain network is indeed overactive in schizophrenics. If the PCC and amPFC are hyperactive, no wonder even fictional entities would be attributed personal relevance and incorporated into reality. I had earlier discussed delusions of reference with respect to default network hyperactivity in schizophrenics, and this can easily be extended to account for the loss of contact with reality, with the relevance-reality linkage in place. When everything is self-relevant, everything is real.

As always, I am excited and would like to see some experiments done with schizophrenics/schizotypals using the same experimental paradigm, to find out whether there are significant differences in the behavioral measures between controls and subjects and whether those are mediated by differential engagement of the default brain network. In autistics, of course, I hypothesize the opposite effects.

Needless to say I am grateful to Neuronarrative for reporting on this and helping me make one more puzzle piece fit in place.

Abraham, A., & von Cramon, D. (2009). Reality = Relevance? Insights from spontaneous modulations of the brain's default network when telling apart reality from fiction. PLoS ONE, 4 (3). DOI: 10.1371/journal.pone.0004741


Saturday, March 21, 2009

SES and the developing brain

I have written about poverty/SES and its effects on brain development/IQ earlier too, and this new review article by Farah and Hackman in TICS is a very good introduction for anyone interested in the issue.

The article reviews the behavioral studies showing that SES is correlated with at least two brain systems: executive function and language abilities. It also reviews physiological data showing that even when behavioral outcomes do not differ, ERPs can show differential activation in the brains of people with low and middle SES, suggesting that differences that may not be detected on behavioral measures may still exist. They also review (f)MRI data that shows no structural differences in the brains of low- and middle-SES children, but definite functional differences. Finally, they review experimental manipulations of social status in laboratories, and show how those studies also indicate that SES and executive function are correlated.

They then turn to the million-dollar question of the direction of causality, and answer it indirectly, based on the SES-IQ causal linkages.

What is the cause of SES differences in brain function? Is it contextual priming? Is it social causation, reflecting the influence of SES on brain development? Alternatively, is it social selection, in which abilities inherited from parents lead to lower SES? Current research on SES and brain development is not designed to answer this question. However, research on SES and IQ is relevant and supports a substantial role of SES and its correlated experience as causal factors.

Slightly less than half of the SES-related IQ variability in adopted children is attributable to the SES of the adoptive family rather than the biological. This might underestimate environmental influences because the effects of prenatal and early postnatal environment are included in the estimates of genetic influence. Additional evidence comes from studies of when poverty was experienced in a child's life. Early poverty is a better predictor of later cognitive achievement than poverty in middle- or late-childhood, an effect that is difficult to explain by genetics. SES modifies the heritability of IQ, such that in the highest SES families, genes account for most of the variance in IQ because environmental influences are in effect at ceiling in this group, whereas in the lowest SES families, variance in IQ is overwhelmingly dominated by environmental influences because these are in effect the limiting factor in this group. In addition, a growing body of research indicates that cognitive performance is modified by epigenetic mechanisms, indicating that experience has a strong influence on gene expression and resultant phenotypic cognitive traits . Lastly, considerable evidence of brain plasticity in response to experience throughout development indicates that SES influences on brain development are plausible.

Differences in the quality and quantity of schooling is one plausible mechanism that has been proposed. However, many of the SES differences summarized in this article are present in young children with little or no experience of school , so differences in formal education cannot, on their own, account for all of the variance in cognition and brain development attributable to SES. The situation is analogous to that of SES disparities in health, which are only partly explained by differential access to medical services and for which other psychosocial mechanisms are important causal factors .

The last point is really important and can be extended. Poorer access to health services for low-SES people may be one reason why, for example, a higher incidence of schizophrenia is found in low-SES neighbourhoods. This brings us to the same chicken-and-egg question as the drift theory of schizophrenia: do people with schizophrenia drift into low SES, or is low SES a risk factor in itself? Exactly this point, the Martha Farah thesis that low SES leads to lower IQ/cognitive abilities, was brought to my attention when I was interacting with a few budding psychiatrists recently. It is important to note that low SES leads to left hypo-frontality (another feature of schizophrenia); schizophrenia is supposed to involve lessened myelination, where nutritional factors may again have a role to play; and access to health care, exposure to chronic stress, and lesser subjective feelings of control may all be mediating factors by which low SES leads to schizophrenia/psychosis. Also remember that schizophrenia is, in a sense, a developmental disorder.

Well, I digressed a bit, but the idea is that not only does low SES affect 'normal' cognitive abilities, it may even increase the risk of 'abnormal' cognitive profiles that may lead to psychosis; and this effect of SES on IQ/cognitive abilities/risk of mental disease is mediated by the effect of SES on the developing brain. I have already covered the putative mechanisms by which SES may affect brain development, but just to recap, here I quote from the paper:

Candidate causal pathways from environmental differences to differences in brain development include lead exposure, cognitive stimulation, nutrition, parenting styles and transient or chronic hierarchy effects. One particularly promising area for investigation is the effect of chronic stress. Lower-SES is associated with higher levels of stress in addition to changes in the function of physiological stress response systems in children and adults. Changes in such systems are likely candidates to mediate SES effects as they impact both cognitive performance and brain regions, such as the prefrontal cortex and hippocampus, in which there are SES differences.

We can only hope that the evil of low SES is recognized as soon as possible and that, if for nothing else then for advancing science, some intervention studies are done that manipulate the SES variables in the right direction and thus ensure that the full cognitive potential of children can flower.

HACKMAN, D., & FARAH, M. (2009). Socioeconomic status and the developing brain Trends in Cognitive Sciences, 13 (2), 65-73 DOI: 10.1016/j.tics.2008.11.003


Wednesday, March 18, 2009

Encephalon #66 is now out!

Encephalon #66 is now out at Ionian Enchantment and is an official no-frills, no-fuss edition. It contains a motley collection of articles; among the ones that caught my fancy was an Effortless Incitement commentary on a Daniel Nettle paper related to how likely it is that you know about your sibling's death, based on whether you are fully related, step-siblings, or maternally/paternally related half-siblings. Another good article is on spatial memory encoding by Neurophilosophy.

An unusual article worth checking out is Podblack Cat's article exploring whether poetry is inspirational in nature and dependent on a one-shot creative process (maybe having a long previous gestation period, but resulting in a dramatic giving-birth moment), or can be perfected with practice and hard work. Although I am a pretty hardline adherent of the view that practice trumps inborn talent/giftedness, I would still maintain that poetry is more of an un/subconscious skill and is not easily broken into explicit steps that can be verbalized first, practiced next, and mastered later; it is more like learning to ride a bicycle: you have to learn it and become good with practice, but you cannot really teach much there. I respect Stephen Fry a lot, and would definitely read The Ode Less Travelled, but I'm not sure I completely buy the theory that you can really teach poetry! I write poetry myself and so am entitled to my own opinion on that front!


Tuesday, March 17, 2009

Why is Science Important: the Film

As many of you might know, I contributed to the web-based project 'Why is science important' run by Alom Shaha. You can read my contributions here. The website was a vehicle toward the final goal of filming the myriad reasons why people involved with science find it interesting/important. The film has now been completed and is available in its entirety at the Why is Science Important website. It has been beautifully made and deserves wide circulation.

I am also trying to embed the video here, but beware that you need to turn HD off if you are on a slow connection, else the download rate would be pathetic.

Why is Science Important? from Alom Shaha on Vimeo.

I would also like to add a small note: while I wholeheartedly agree with Alom that creativity and curiosity are what make science interesting and important, I would still like to stress the 'truth' factor on which I dwelt in my original posting. In my view, a delusional person also very creatively constructs an elaborate delusional framework to make sense of his experiences, and he is indeed very curious to find out and explain things, but lacking any insight/grounding in reality, he can't be deemed scientific. Science has to be seen through the prism of the closest approximation to reality that we have, to distinguish it from other similarly creative and curiosity-guided phenomena like the pseudosciences of astrology and religion (which satisfy existential curiosity and are at times very creative in their tenets, beliefs, etc.). To me, science still stands for the only method that can bring me closer to the 'truth' and thus help me deal most pragmatically with external reality.

But do read the different views, presented excellently and deftly by Alom. Kudos to him for taking such an initiative. If it were up to me, I would mandate that this film be shown to all school children and teachers interested in science, to give them a broader perspective. Please do link from your blogs or circulate otherwise.


Tuesday, March 10, 2009

The factor structure of Religiosity and its neural substrates

A new article in PNAS by Grafman et al. argues that religiosity can be broken down into three factors, and that the underlying machinery these factors use is the basic Theory of Mind (ToM) circuitry, thus substantiating the claim that religion arose as a byproduct of basic ToM-related adaptations, while not ruling out that, once established, religion may have provided adaptive advantages.

First, a detour. I am especially interested in this study because I had once claimed that schizophrenics were more religious than autistics, and I have been maintaining that religion is just one aspect of an underlying hyper-mentalizing to hyper-physicalism continuum, with these two spectrum disorders lying at opposite ends. The case for reduced ToM abilities in ASD seems fairly settled; it is also becoming apparent that in schizophrenia spectrum disorders there is an excess of ToM. This study, by showing the ToM-to-religion linkage, fills in the gaps, and another puzzle piece falls into place.

On to the study. The authors first show that religious belief can be split into three factors. They use a technique that is novel to me, multidimensional scaling (MDS), to tease out the factors associated with religious belief. I have not checked how MDS works in detail, but I assume it is similar to factor analysis and can give us a reliable factor structure underlying the data. Building on previous research, they recovered the following three factors:

  1. God’s perceived level of involvement,
  2. God’s perceived emotion, and
  3. religious knowledge source. 
The first factor refers to endowing intentionality to supernatural agents like God; the second factor refers to endowing emotions to God; and the third factor refers to the source of religious beliefs, whether doctrinal or derived from experience. Thus we have the trinity of intention, emotion and belief: the same trinity involved in ToM tasks. The authors do a good job of describing the factors, so I'll let them do it.
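For readers who, like me, haven't dug into MDS: a minimal sketch of classical (Torgerson) MDS may help show how such dimensions can be recovered from pairwise dissimilarity ratings. Everything below (the toy dissimilarity matrix, the notion of four "belief statements") is my own illustrative assumption, not data from the paper:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions
    from an n x n matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered pseudo-Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # indices of the k largest eigenvalues
    scale = np.sqrt(np.clip(vals[idx], 0, None))  # drop negative eigenvalues
    return vecs[:, idx] * scale           # n x k coordinates

# Toy dissimilarities among 4 hypothetical belief statements:
# statements 0 and 1 resemble each other, as do 2 and 3,
# while the two pairs are rated as very dissimilar.
D = np.array([[0., 1., 4., 4.],
              [1., 0., 4., 4.],
              [4., 4., 0., 1.],
              [4., 4., 1., 0.]])
coords = classical_mds(D, k=2)
# The first recovered dimension separates the clusters {0,1} and {2,3},
# analogous to how the paper's dimensions separate contrasting statements.
```

Like factor analysis, MDS yields interpretable axes, but it works from dissimilarities between items rather than correlations between variables.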

Dimension 1 (D1) correlated negatively with God’s perceived level of involvement (–0.994), Dimension 2 (D2) correlated negatively with God’s perceived anger (–0.953) and positively with God’s perceived love (0.953), and Dimension 3 (D3) correlated positively with doctrinal (0.993) and negatively with experiential (–0.993) religious content. D1 represents a quantitative gradient of a single concept and we will be referring to it as ‘‘God’s perceived level of involvement.’’ D2 and D3 represent gradients of contrasting concepts; we will be referring to them as ‘‘God’s perceived emotion’’ (D2) and ‘‘religious knowledge source’’ (D3).

God’s perceived level of involvement (D1) organizes statements so that ‘‘God is removed from the world’’ or ‘‘Life has no higher purpose’’ have high positive coordinate values, while ‘‘God’s will guides my acts,’’ ‘‘God protects one’s life,’’ or ‘‘God is punishing’’ have high negative values. Generally speaking, on the positive end of the gradient lie statements implying the existence of uninvolved supernatural agents, and on the negative end lie statements implying involved supernatural agents.

God’s perceived emotion (D2) ranges from love to anger and organizes statements so that ‘‘God is forgiving’’ and ‘‘God protects all people’’ have high positive-coordinate values, while ‘‘God is wrathful’’ and ‘‘The afterlife will be punishing’’ have high negative values. Generally speaking, on the positive end of the gradient lie statements implying the existence of a loving (and potentially rewarding) supernatural agent, and on the negative end lie statements suggestive of wrathful (and potentially punishing) supernatural agent.

Religious knowledge (D3) ranges from doctrinal to experiential and organizes statements so that ‘‘God is ever-present’’ and ‘‘A source of creation exists’’ have high positive-coordinate values, while ‘‘Religion is directly involved in worldly affairs’’ and ‘‘Religion provides moral guiding’’ have high negative values. Generally speaking, on the positive end of the gradient lies theological content referring to abstract religious concepts, and on the negative end lies theological content with moral, social, or practical implications.

This breakup of religiosity into three factors is itself commendable, but then they go on to show, using fMRI data, that these factors activate areas of the brain associated with ToM abilities. I don't really understand all their fMRI data, but the results seem interesting. Here is what they conclude:

The MDS results confirmed the validity of the proposed psychological structure of religious belief. The 2 psychological processes previously implicated in religious belief, assessment of God’s level of involvement and God’s level of anger (11), as well as the hypothesized doctrinal to experiential continuum for religious knowledge, were identifiable dimensions in our MDS analysis. In addition, the neural correlates of these psychological dimensions were revealed to be well-known brain networks, mediating evolutionary adaptive cognitive functions.

This study defines a psychological and neuroanatomical framework for the (predominately explicit) processing of religious belief. Within this framework, religious belief engages well-known brain networks performing abstract semantic processing, imagery, and intent-related and emotional ToM, processes known to occur at both implicit and explicit levels (36, 39, 50). Moreover, the process of adopting religious beliefs depends on cognitive-emotional interactions within the anterior insulae, particularly among religious subjects. The findings support the view that religiosity is integrated in cognitive processes and brain networks used in social cognition, rather than being sui generis (2–4). The evolution of these networks was likely driven by their primary roles in social cognition, language, and logical reasoning (1, 3, 4, 51). Religious cognition likely emerged as a unique combination of these several evolutionarily important cognitive processes (52). Measurable individual differences in these core competencies (ToM, imagination, and so forth) may predict specific patterns of brain activation in response to religious stimuli.

As always I am excited and would like to see some field work being carried out to determine religiosity in ASD and Schizophrenia spectrum groups and see if we get the same results (less religiosity in autism and more religiosity in schizophrenics) as predicted, based on their baseline ToM abilities.

PS: I was not able to use the DOI lookup feature of Research Blogging, but the citation for the article is:
Dimitrios Kapogiannis, Aron K. Barbey, Michael Su, Giovanna Zamboni, Frank Krueger, and Jordan Grafman (2009). Cognitive and neural foundations of religious belief. PNAS.


Monday, March 09, 2009

Evidence for heightened Agency in Schizophrenia

I have been maintaining that autism and schizophrenia are opposites on a continuum, and one dimension on which they differ is agency: autistics attribute too little agency to themselves (and others), while schizophrenics attribute too much agency to themselves (and others).

The case for people with ASD is fairly settled. They have deficits in Theory of Mind (ToM), and one mechanism by which this deficit seems to arise is via their attributing less agency to themselves as well as others.

For schizophrenics too, it was speculated that they have problems with agency, but a clear illustration that they have an enhanced agency-attribution device had not been firmly established. This study, which dates back to 2003, in my opinion establishes that there is hyper-agency attribution (or hyper-self-mentalizing) in schizophrenics.

The study in question is by Haggard et al., and it uses an experimental paradigm to illustrate that schizophrenics indeed have problems with self-agency attribution, and in the hypothesized direction at that.

Here is the abstract:

An abnormal sense of agency is among the most characteristic yet perplexing positive symptoms of schizophrenia. Schizophrenics may either attribute the consequences of their own actions to the intentions of others (delusions of influence), or may perceive themselves as causing events which they do not in fact control (megalomania). Previous reports have often described inaccurate agency judgments in schizophrenia, but have not identified the disordered neural mechanisms or psychological processes underlying these judgments. We report the perceived time of a voluntary action and its consequence in eight schizophrenic patients and matched controls. The patients showed an unusually strong binding effect between actions and consequences. Specifically, the temporal interval between action and consequence appeared shorter for patients than for controls. Patients may overassociate their actions with subsequent events, experiencing their actions as having unusual causal efficacy. Disorders of agency may reflect an underlying abnormality in the experience of voluntary action.

Now, let us pause and recollect that Chris Frith had postulated that the voluntary action mechanism in schizophrenics is somewhat malformed, and specifically that there is a disconnect between intention attribution and voluntary action manifestation. He, however, had not clearly stated that there would be over-attribution of intention to voluntary actions. We all know that dopamine is associated with voluntary action (voluntary movements) and that baseline dopamine is in excess in schizophrenics. This paper ties things together, showing that excess dopamine secretion in the basal ganglia and cortical areas may lead to greater binding between intentions and subsequent actions (consequences), and by this mechanism may lead to over-attribution of agency. Of course, the paper does not establish this mechanism, but just speculates on it as one of the possible mechanisms. It is also important to pause and note that schizophrenics have a jumping-to-conclusions bias; thus, if an intention and an action are more tightly bound (occur in close temporal proximity), they are more likely to judge the two events to be related and the intention to have caused the action.

Now let me get to the actual experiment. Haggard et al. asked schizophrenics as well as matched controls to note the subjective time (using Libet's approach) at which they decided to voluntarily press a computer key, and also the subjective time at which they first heard an auditory tone. The tone was presented 250 ms after the voluntary key press. As established earlier, and confirmed by the controls in this experiment, people shift the perceived key press forward in time, away from the moment they actually pressed the key and towards the tone presentation, so that subjectively the key press happens some time after the objective key press. The effective subjective interval between the key press and the tone is thus reduced. This binding between a voluntary action and its consequence happens in normal individuals too, but in schizophrenics it was significantly greater in magnitude, and this depended on two factors. First, as in normals, the perceived key press was shifted forward in time towards the tone presentation, but this shift was significantly greater than in controls. Secondly, in schizophrenics the auditory tone was anticipated, shifted back in time towards the voluntary key press: it seemed to them that the tone had occurred before it was actually presented. This led to a very significant reduction in the subjective interval between the voluntary key press and the tone, binding the two events strongly and leading to stronger inferred agency. To quantify things a bit: in normal controls, the key press was perceived on average 26 ms after the actual key press and the tone 5 ms before its actual presentation, so the subjective interval between key press (intention) and tone (consequence) was 250 − (26 + 5) = 219 ms.
In schizophrenics, the key press was deemed to occur 60 ms after the actual key press; most importantly, however, the tone was not heard after its presentation but in anticipation, 139 ms before its actual presentation. Thus the perceived subjective interval between key press (intention) and tone (consequence) was 250 − 60 − 139 = 51 ms only. One can easily see that if the perceived subjective interval between two events is shortened in schizophrenia, patients may end up falsely clubbing together many merely coincidental events, because they seem to follow each other in close temporal proximity.
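Taking the shift values quoted above at face value, the perceived intervals reduce to simple arithmetic. A minimal sketch (the function name and sign conventions are mine, not from the paper):

```python
def perceived_interval(actual_gap_ms, action_delay_ms, tone_anticipation_ms):
    """Subjective action-tone interval under intentional binding.

    action_delay_ms: how much later than reality the key press is perceived
    tone_anticipation_ms: how much earlier than reality the tone is perceived
    """
    return actual_gap_ms - action_delay_ms - tone_anticipation_ms

# Shifts as quoted in the post, all in milliseconds:
controls = perceived_interval(250, 26, 5)     # -> 219 ms
patients = perceived_interval(250, 60, 139)   # -> 51 ms: much stronger binding
```

The patients' interval shrinks to roughly a quarter of the controls', which is the "hyperbinding" the authors describe.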

To appreciate these results, one needs to put them in the broader context of what we know about agency in schizophrenics:

Previous laboratory studies have investigated agency using action attribution tasks. In these tasks, the patient is asked to perform an action, and is shown a visual image corresponding to that action, for example, a line drawn with a pen , a video of a hand making a manual posture , or a computerised image of a joystick moving. By introducing a mismatch between the performed action and the visual feedback, experimenters investigate the accuracy of attribution judgments. The subject has to attribute the viewed image either to an action he has just been instructed to make or to some other source. Interestingly, all these studies have found schizophrenics abnormally willing to attribute to themselves actions which in fact differ from the ones they performed. Thus, they are less sensitive than control subjects to spatial, temporal or kinematic mismatches between actions and visual feedback. The direction of these results points towards an excessive, rather than a reduced, sense of agency. Such results have been interpreted in the context of an internal forward model. Schizophrenic patients’ errors involve mostly over-attribution, implying a forward model with an unusually tolerant comparator.

Impaired judgement of agency can also be linked to the brain abnormalities underlying the disease. Agency involves forming a conscious mental association between one’s own intentional actions, and their consequences in the outside world. Thus, agency may be a conscious aspect of a more general system for instrumental or operant learning about environmental contingencies and rewards. Animal learning studies show that dopaminergic circuits, including the basal ganglia and medial forebrain are essential for associating actions with their effects, and for motivating behaviours. Brain imaging studies in man show that these same areas are active when a voluntary action produces a reward or other salient consequence . Moreover, these dopaminergic circuits are overactive in schizophrenia . Excessive dopaminergic activity might therefore explain abnormalities of conscious agency in schizophrenia, such as over-association between intentions and external events.

This is how they interpret their results:

More importantly, our schizophrenic patients seem to show an exaggerated version of the normal binding effect, or hyperbinding. These results could account for the findings of some action attribution experiments. Franck et al. asked patients and controls to move a joystick and then to observe their movements on a computer screen after a delay. The experimenters systematically varied the delay to investigate at what point the two groups ceased to accept the observed action as their own. Control subjects detected the temporal discrepancy between their action and the image with delays of around 100–150 ms. Schizophrenic subjects were much more tolerant, and accepted the viewed action as their own even for delays of 300 ms. Overall, the detection threshold for the relevant action was increased by about 150–200 ms for the patients compared to the controls. This value can be compared to the 180 ms difference between our patients and controls in the implied perceptual duration of the interval between action and tone.

The direction of the attribution effect is important: schizophrenics over-attributed events to their own agency. Our data suggests that schizophrenic patients have unusually strong associations between conscious representations of action and consequence. Thus, they might bind action and viewed image across the substantial delay periods imposed in the Franck et al. experiment, and be unaware of the artificially-induced lag between these events. There may be a critical period in which to perceive the consequence of an action. Actions and events falling in this period may be perceptually bound. A deficit in setting the duration of this critical period in schizophrenics could contribute to the shifts we found in their subjective temporal experience. This view would interpret abnormal conscious experience in schizophrenia as a problem in predicting the consequences of one’s own actions. Further work could investigate whether temporal analysis in schizophrenic patients is defective only when concerning their own actions, or also when observing actions made by others.

I am thrilled as usual, and predict that if the same experimental paradigm is used with autistics, they will show very little or no forward shift of subjective time between their actual voluntary key press and the subjective moment at which they decided to press the key. Also, there would be no anticipatory backwards shift of subjective time for when the tone was heard. Thus, autistics would perceive the gap as the full 250 ms, or may even perceive it as more than 250 ms, depending on whether they shift the subjective key press backwards in time. In any case, they should show lesser binding between the intention (if they can form one) and the consequence.
Haggard P, Martin F, Taylor-Clarke M, Jeannerod M, Franck N. (2003). Awareness of action in schizophrenia Neuroreport, 14 (7), 1081-1085
