Thursday, March 29, 2007

Yellow - Off, Blue - On: A new neural code that could help epileptics

While there has been some work on light therapy for bipolar disorder, specifically limiting exposure to blue light to keep the circadian clock in check, this time the effects of blue and yellow pulses of light delivered deep inside the brain, as investigated by an MIT team, have resulted in a promising treatment for epilepsy.

Epilepsy, as we all know, is caused by excitation of neurons at a focal point and the subsequent spreading of that activation, so that all or a majority of neurons get excited at the same time. The normal treatment, in case the epileptic fits become life threatening, is neurosurgery, i.e. removing the brain area around the focal point.

This research has focused on the effects of yellow pulses of light, inside the brain, on neurons engineered to express the halorhodopsin gene, which encodes a light-driven chloride pump: yellow light moves chloride ions into the cell, hyperpolarising it and thus ensuring that the neuron doesn't fire easily. I believe they have performed the experiments in laboratory cultures (in vitro) and plan to replicate them in transgenic mice carrying this gene, so it is a long haul from here to actual treatment options for epileptics. Still, the possible applications are fascinating:

Many epilepsy patients have implanted electrodes that periodically give their brains an electric jolt, acting as a defibrillator to shut down overactive neurons. This new research opens up the possibility of an optical implant that could do the same thing, using light instead of electricity. The Media Lab neuroengineering group plans to start studying such devices in transgenic mice this year.


Thus we have a radically new treatment option for epilepsy. It is also pertinent to note that the same group had earlier identified a mechanism whereby blue pulses of light could lead to excitation of the brain. Thus, with appropriate implants in the brain, one can, using light, control both the excitation and the inhibition of neuronal circuits. What advantages these offer over traditional electrode implants remains to be seen.
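
To make the push-pull idea concrete, here is a minimal sketch in Python (my own toy leaky integrate-and-fire neuron, not the MIT group's model; all parameters are made up) showing how an added hyperpolarising current, standing in for the chloride influx that halorhodopsin provides under yellow light, can silence a cell that would otherwise fire, while an added depolarising current, standing in for the blue-light drive, makes it fire more.

    # Toy leaky integrate-and-fire neuron (illustrative only; parameters are arbitrary).
    # "light_current" stands in for the optogenetic drive: negative (chloride influx,
    # yellow light) hyperpolarises the cell, positive (blue light) depolarises it.

    def count_spikes(light_current, drive=1.6, v_rest=-65.0, v_thresh=-50.0,
                     v_reset=-65.0, tau=20.0, dt=1.0, steps=500):
        v, spikes = v_rest, 0
        for _ in range(steps):
            dv = (-(v - v_rest) + drive * 10.0 + light_current * 10.0) / tau
            v += dv * dt
            if v >= v_thresh:          # threshold crossed: emit a spike and reset
                spikes += 1
                v = v_reset
        return spikes

    print("baseline     :", count_spikes(0.0))    # fires at some baseline rate
    print("yellow light :", count_spikes(-1.0))   # hyperpolarising current -> silenced
    print("blue light   :", count_spikes(+1.0))   # depolarising current -> fires more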

The group also plans to use the new method to study neural circuits. Last year, Boyden devised a technique to stimulate neurons by shining blue light on them, so with blue and yellow light the researchers can now exert exquisite control over the stimulation and inhibition of individual neurons.


Let's hope they succeed in their efforts, not only to help epileptics with non-surgical treatments, but also to more deeply 'see' the neural circuits and the neural codes.



Beware of the mouse who knows when it is in a trap (or den or nest or bed)

We all know that even a mouse is a tiger in its own den. But for that mouse to become a tiger, it must have an awareness of when it is in a den and when it isn't. Till now, knowledge of abstract concepts like a den or a nest or a bed or a mouse-trap was assumed to be limited to humans and higher primates. Mice, being such lowly creatures, were not supposed to have abstract concepts, and though they may remember a particular den or nest as their own, when placed in a new nest they would supposedly not be aware that the enclosure/furniture can serve as a nest. To put things simply, they were not supposed to identify objects based on their functionality. If one changed the shape or size of the nest, or the construction material, then they were supposed to get confused and would not have been able to identify the object as a nest or a bed.

All that has become history now, with a new study (pdf) that clearly demonstrates that mice have abstract concepts in their minds and that specific neurons in the hippocampal area fire when the mouse is in a bed or is entering/exploring one. We already know that we have place cells in the hippocampus that fire when a mouse is in a particular location in space and that these are tied to episodic memory. The hippocampus has also been implicated in learning mechanisms, so it is only appropriate that we discover concept cells in the hippocampus that fire when different concepts like bed/nest/trap are encountered. And of course we also know of the 'Halle Berry' neurons in the medial temporal lobe that fire when viewing a particular face.

In this study, the authors found that there were three kinds of cells in the CA1 region of the hippocampus that had distinct firing patterns related to the concept of nests. Whenever a mouse encountered a nest, the transient-on class would increase their rate of firing. If the mouse was not facing the nest, these cells would not fire; only when the mouse was facing and about to enter the nest did these neurons fire. The second group of neurons were the persistent-on ones, which would fire at a very high rate once the mouse was in the nest and would continue to do so until the mouse left the nest. The third type were just the opposite of these - the persistent-off ones, which ceased their normal firing once the mouse entered the nest. Perhaps the mouse has not only a concept of a nest, but also of a not-nest. The baseline firing of the persistent-off neurons may be a signalling mechanism within the mouse's brain to indicate that it is not in a potentially homely place.
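
Roughly, the classification logic could look like the following toy Python sketch (my own illustration, not the authors' analysis code; the firing rates and thresholds are invented): compare each cell's firing rate outside the nest, while approaching it, and inside it.

    # Hypothetical firing rates (spikes/s) for illustration only.
    cells = {
        "cell_A": {"outside": 2.0, "approach": 9.0, "inside": 2.5},   # transient-on
        "cell_B": {"outside": 1.5, "approach": 2.0, "inside": 12.0},  # persistent-on
        "cell_C": {"outside": 8.0, "approach": 7.5, "inside": 1.0},   # persistent-off
    }

    def classify(rates, factor=3.0):
        """Crudely label a cell by how its rate changes around the nest."""
        out, app, ins = rates["outside"], rates["approach"], rates["inside"]
        if app > factor * out and ins < factor * out:
            return "transient-on (fires on facing/entering the nest)"
        if ins > factor * out:
            return "persistent-on (fires for as long as the animal is in the nest)"
        if ins < out / factor:
            return "persistent-off (shuts down inside the nest)"
        return "unclassified"

    for name, rates in cells.items():
        print(name, "->", classify(rates))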

You can read more about the study at the Neurophilosopher (and there you'll also find a great video of a mouse in the study exploring the nest, along with its firing neurons), but what amazes me is this: isn't it logical, then, that mice also have a concept of a trap - a potentially dangerous enclosure? And whatever variations we may make, isn't it evident that just as we can recognize a trap in its various guises, the mice can too? Making traps of wood or metal, or of different sizes, should not matter. Thus, if a mouse has been exposed to a trap once, there is no use trying to lure it into a trap of a different type. But maybe I'm just being pessimistic; maybe the lesson we can draw from this study is to make the traps similar to their 'nests', so that the mice are comfortable and eager to enter the trap. At long last, a study that leads to better mouse traps!



Wednesday, March 28, 2007

Depression, Neurogenesis and Spatial navigation

We all know that the hippocampus is the seat of both memory and spatial abilities (the cognitive map theory). We also know that most of the neurogenesis in adult humans happens in the hippocampus. We also know that depression is caused by stress, and both stress and depression lead to, or are correlated with, reduced neurogenesis in the hippocampus (my learned helplessness theory of depression).

Now a new study has found that depressed people have impaired spatial navigation abilities. Putting 2 and 2 together, it is highly plausible that this relationship between depression and impaired spatial navigation is mediated by the reduced neurogenesis or atrophy in the hippocampus.

Relatedly, here is a good article (pdf) on how new anti-depressants are targeting neurogenesis in the hippocampus as a mechanism to alleviate depression.

Three cheers for the cognitive map theory - the focus with which this blog started!

Hat Tip: BPS Research Digest


Simulating the future and remembering the past: Are we prediction machines?

This post is about an article by Schacter et al (pdf) regarding how the constructive nature of memories may crucially be due to the need to simulate future scenarios. But before I get to the main course, I would like to touch upon a starter: Jeff Hawkins's Hierarchical Temporal Memory (HTM) hypothesis. I would recommend that you watch this excellent video.

As per Jeff Hawkins, we humans are basically prediction machines, constantly predicting external causes and our responses to them. Traditionally, the behaviorist account has been that we are nothing but a bundle of associations - either conditioned Pavlovian associations between stimuli (stimulus-response) or Skinnerian associations between our operant actions and environmental rewards. Thus every behavior we indulge in is guided by our memory of past associations and the incoming stimulus. Jeff Hawkins refines this by postulating that we are not passive responders to environmental stimuli, but actively predict what future causes (stimuli) are expected and what our response to those stimuli may be. Thus in his HTM model, the memory of past events not only exerts influence via a bottom-up process of responding to the incoming stimulus; it is also used for a top-down expectation or prediction of the incoming stimulus and our responses to it. Thus, we are also prediction machines, constantly using our memory to predict future outcomes and our possible responses.
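
This is not Hawkins's actual HTM algorithm (which is hierarchical and far richer), but here is a minimal Python sketch of the bare idea: memory of past sequences is used top-down to predict the next stimulus, and that expectation can then be checked against what actually arrives.

    from collections import defaultdict, Counter

    class SequencePredictor:
        """Toy 'prediction machine': learns first-order transitions and
        uses that memory to predict the next stimulus top-down."""

        def __init__(self):
            self.transitions = defaultdict(Counter)

        def learn(self, sequence):
            for prev, nxt in zip(sequence, sequence[1:]):
                self.transitions[prev][nxt] += 1

        def predict(self, current):
            options = self.transitions[current]
            return options.most_common(1)[0][0] if options else None

    p = SequencePredictor()
    p.learn("abcabcabd")                      # past experience
    for stimulus in "abc":
        expected = p.predict(stimulus)        # top-down expectation
        print(f"after '{stimulus}' expect '{expected}'")
    # A mismatch between the expectation and the actual input would be the
    # 'prediction error' that drives further learning.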

Now let's get back to the original Schacter article. Here is the abstract:

Episodic memory is widely conceived as a fundamentally constructive, rather than reproductive, process that is prone to various kinds of errors and illusions. With a view toward examining the functions served by a constructive episodic memory system, we consider recent neuropsychological and neuroimaging studies indicating that some types of memory distortions reflect the operation of adaptive processes. An important function of a constructive episodic memory is to allow individuals to simulate or imagine future episodes, happenings, and scenarios. Because the future is not an exact repetition of the past, simulation of future episodes requires a system that can draw on the past in a manner that flexibly extracts and re-combines elements of previous experiences. Consistent with this constructive episodic simulation hypothesis, we consider cognitive, neuropsychological, and neuroimaging evidence showing that there is considerable overlap in the psychological and neural processes involved in remembering the past and imagining the future.


As per the paper, the same brain areas and mechanisms are involved in both remembering a past event and imagining a future one - and the regions involved include the hippocampus. These findings in themselves are not so fascinating, but the argument that Schacter et al give for why the same regions are involved in both memory retrieval and future imaginings, and how this leads to confabulations and false recognitions, is very fascinating. As per them, because we need to simulate future events, and because future events are never an exact replica of past events, we do not store past events verbatim but store a gist of the event, so that we can recombine the nebulous gist to create different possible future scenarios. Due to this fact (the need for simulation of future events), memory is not perfect, and even normal individuals may confabulate (attribute the source of their memory erroneously) or make false recognitions on memory tests like the DRM.

First, a bit of background on the DRM paradigm. In this test, a list of related words is presented to a subject: e.g. yawn, bed, night, pillow, dream, rest, etc. All of these relate to the theme of sleep. Later, in a recognition test, when the thematically related but never-presented word (sleep) is presented to normal subjects, they most often say that they had encountered it earlier. However, given an unrelated word like hunger, most are able to recognize that the word was not encountered previously. What Schacter et al found was that in subjects who had damage to the hippocampus or other memory areas and were amnesic, this effect of falsely recognizing the gist word was reduced. In other words, those with brain damage to memory areas were less likely to say that they had encountered the related word sleep during the original trial - this despite their poor performance, compared to controls, in remembering the old list items overall. This clearly indicates that remembering the gist, as opposed to the details, is a very important memory mechanism.
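
A toy way to see why storing only the gist produces exactly this pattern of false recognition (my own illustrative Python sketch, not Schacter et al.'s model; the 'semantic' feature values are invented): represent the study list only by the average of its features, and call a probe word 'old' if it is close enough to that stored gist.

    # Hand-made 'semantic' feature vectors, purely for illustration:
    # dimensions ~ (related to sleep, related to food, related to night-time)
    words = {
        "yawn":   (0.9, 0.0, 0.6),
        "bed":    (0.8, 0.0, 0.7),
        "pillow": (0.8, 0.0, 0.5),
        "dream":  (0.9, 0.0, 0.8),
        "rest":   (0.7, 0.1, 0.3),
        "sleep":  (1.0, 0.0, 0.8),   # never studied: the critical lure
        "hunger": (0.1, 1.0, 0.2),   # never studied: unrelated word
    }
    study_list = ["yawn", "bed", "pillow", "dream", "rest"]

    # Store only the gist: the average of the studied items' features.
    dims = len(next(iter(words.values())))
    gist = tuple(sum(words[w][i] for w in study_list) / len(study_list)
                 for i in range(dims))

    def familiarity(word):
        """Similarity of a probe to the stored gist (plain dot product)."""
        return sum(a * b for a, b in zip(words[word], gist))

    for probe in ("sleep", "hunger"):
        f = familiarity(probe)
        print(f"{probe}: familiarity={f:.2f} ->", "'old'" if f > 0.8 else "'new'")
    # 'sleep' matches the stored gist and gets falsely called old; 'hunger' does not.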

I believe that we should also take the prototype-versus-exemplar differences in categorization between males and females into account here. I would be very interested to know whether the data collected showed the expected differences between males and females, and I hope the results are not confounded by not taking this gender difference into account.

Anyway, returning to the experimental methodology, another sticking point seems to be the extension of results obtained with semantic memory (like that for word lists) to episodic memory.

Setting that aside, the gist and false recognition results clearly indicate that the constructive nature of memory is an adaptation (it is present in normal subjects) and is disrupted in amnesics and people with dementia.

Thus, now that it is established that memory is reconstructive and that this reconstruction is adaptive, the question arises why it is reconstructive and not reproductive. To this Schacter answers that it is because the same brain mechanisms used for reconstructing memory from gist are also used for imagining or simulating future scenarios. They present ample neuropsychological, neuroimaging and cognitive evidence on this, and I find it totally convincing.

The foregoing research not only provides insights into the constructive nature of episodic memory, but also provides some clues regarding the functional basis of constructive memory processes. Although memory errors such as false recognition may at first seem highly dysfunctional, especially given the havoc that memory distortions can wreak in real-world contexts (Loftus 1993; Schacter 2001), we have seen that they sometimes reflect the ability of a normally functioning memory system to store and retrieve general similarity or gist information, and that false recognition errors often recruit some of the same processes that support accurate memory decisions. Indeed, several researchers have argued that the memory errors involving forgetting or distortion serve an adaptive role.

However, future events are rarely, if ever, exact replicas of past events. Thus, a memory system that simply stored rote records of what happened in the past would not be well-suited to simulating future events, which will likely share some similarities with past events while differing in other respects. We think that a system built along the lines of the constructive principles that we and other have attributed to episodic memory is better suited to the job of simulating future happenings. Such a system can draw on elements of the past and retain the general sense or gist of what has happened. Critically, it can flexibly extract, recombine, and reassemble these elements in a way that allows us to simulate, imagine, or ‘pre-experience’ (Atance & O’Neill 2001) events that have never occurred previously in the exact form in which we imagine them. We will refer to this idea as the constructive episodic simulation hypothesis: the constructive nature of episodic memory is attributable, at least in part, to the role of the episodic system in allowing us to mentally simulate our personal futures.


Finally, I'd like to end with the conclusions the authors drew:

In a thoughtful review that elucidates the relation between, and neural basis of, remembering the past and thinking about the future, Buckner and Carroll (2007) point out that neural regions that show common activation for past and future tasks closely resemble those that are activated during “theory of mind” tasks, where individuals simulate the mental states of other people (e.g., Saxe & Kanwisher 2003). Buckner and Carroll note that such findings suggest that the commonly activated regions may be specialized for, and engaged by, mental acts that require the projection of oneself in another time, place, or perspective”, resembling what Tulving (1985) referred to as autonoetic consciousness.


This seems to be a very promising direction. The 'another time and place' can naturally be simulated within the hippocampus, which also specializes in cognitive maps. We may use the cognitive maps not only to remember past events, but also to simulate new events. In this respect the importance of dreams may be paramount. Dreams (and sleep) may be a mechanism whose primary purpose is not memory consolidation; rather, I suspect that the primary function of dreams is to work on the gist of the memories from the previous day, simulate possible future scenarios, and then keep in store those memories that would help and are likely to be encountered in the future. Thus, while dreaming we are basically predicting future scenarios and sorting information as per its future relevance. Not a particularly path-breaking hypothesis, but I'm not aware of any thinking in this direction. Do let me know of any other similar hypotheses regarding the function of dreams as predictors and not merely as consolidators.



Body Posture affecting memory recall

First Mixing Memory wrote about it, and now Dave at Cognitive Daily is enchanted with this study, which shows that if one assumes the same body posture during memory retrieval as at the time of memory formation, then recall is better. I do like this study, and I think it is important, but I am hardly surprised or overwhelmed by the results.

To explain the study in a nutshell (you are encouraged to read about it at Mixing Memory or Cognitive Daily), the authors found that just as some smells, sights or sounds can trigger associated memories, so too can body posture. Now, to me this doesn't come as a surprise, because I have always been fascinated by the three other senses that are normally ignored by those who claim we have just five senses: sight, sound, touch (which includes all somato-sensory senses like pain, temperature, etc.), smell and taste. The three other senses that are normally ignored are the vestibular sense (the sense of balance), the kinesthetic sense (the sense of self-movement) and proprioception (the sense of body position and posture).

Evidently, if memory encoding uses sensory inputs to encode a particular memory, then memories would be associated with all of the sense modalities, and a trigger in any of them that is strong enough can trigger recall of that memory.
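
In code terms, the claim is simply that each memory is stored with tags from several modalities and that any single tag can serve as a retrieval cue. A minimal, entirely made-up Python sketch:

    # Toy context-addressable memory: each stored episode carries tags from several
    # modalities (all names and episodes here are invented for illustration).
    episodes = [
        {"content": "chatting with a friend",
         "tags": {"smell": "coffee", "sound": "rain", "posture": "sitting"}},
        {"content": "receiving the exam results",
         "tags": {"smell": "grass", "sound": "traffic", "posture": "standing"}},
    ]

    def recall(cue_modality, cue_value):
        """Return every episode whose tag in that modality matches the cue."""
        return [e["content"] for e in episodes
                if e["tags"].get(cue_modality) == cue_value]

    print(recall("posture", "standing"))  # posture alone is enough to cue the episode
    print(recall("smell", "coffee"))      # so is a smell, or any sufficiently strong cue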

One can test this for the other senses too - the vestibular and kinesthetic - and one would likely find that memories are recalled better if the same vestibular or kinesthetic conditions are invoked. Experimentally, one could have people dance, put them on merry-go-rounds, put them atop an elephant, let them drive, let them go up and down in a lift, and ask for congruent or incongruent memory recall. I won't be surprised if the same effects are observed with the kinesthetic sense too. Maybe one of you can make this your thesis and tell me the results, so I can blog about it later!


Tuesday, March 27, 2007

Get onboard The Tech Link Train

I was tagged on the Tech Link Train by Alvaro at the SharpBrains blog - many thanks! This is a "link train" of science and technology blogs, so let's get straight down to business.

If you are "tagged":

1. Write a short paragraph at the beginning of your post and link back to the blog that put you on the list in the paragraph. This isn’t a suggestion. You need to break up the duplicate content string. Someone took the time to add you so the least you can do is give them an extra linkback.
2. Copy the list of originals below completely and add it to your blog. If you would like a different keyword for your blog then change it when you do your post and it should pass to most blogs with that keyword.
3. Take the additions from the blog that added you and place them in the “Originals” list.
4. Add no more than 5 new technology, science, or consumer electronics blogs to the list in the “My Additions” section.

My additions:

Mind Hacks

The Thinking Meat Project

BrainEthics

Cognitive Daily

Mixing Memory


The originals:
Intelligence Theories and Tests
The Mouse Trap
Brain-based Business
Future-making Serious Games
The Thinking Blog
SharpBrains Brain Fitness
Developing Intelligence
Brain Hammer
SCLin's Neuroscience blog
Pure Pedantry
The Corpus Callosum
Madam Fathom
Memoirs of a Postgrad
Peripersonal Space
The Phineas Gage Fan Club
Neurophilosophy
Healthoma
Neural Gourmet
bio::blogs
Dr.Katte’s Blog
Brain Blogger
DigitalPhocus
Alpesh Nakar
OneTipADay
The How To Geek
The TechZone
Mega TechNews
Tech Buzz
Techzi
Connected Internet
John Chow dot Com
Ted Leung on the Air
Geek is a chic
you’ve been HACKED
IDIOT TOYS
JMH Techtronics
Web Services
UtterlyGeek
Tech It Like A Man!
Ugh!!’s Greymatter Honeypot
techboyardee
The Tech Inspector
Smart Machines
Kuiper Cliff
businessbytesgenesmolecules
MindBlog


Encephalon #19 is online now!

Encephalon #19 is now online at Peripersonal Space. This time it is themed around the misperceived dichotomy between reason and emotion and is served with a handsome and generous dressing of quotes. Rush over and satisfy your appetite.


Thursday, March 22, 2007

Brave Heart: does will power reside in the heart?

I have written earlier regarding heart rate variability (HRV), which is primarily driven by the autonomic nervous system (the opposing effects of the PNS and SNS), and how a flexible HRV is related to a better response to stress and reduced anxiety in the face of external stressors. While looking at the evidence and linkages between HRV and emotional regulation, I had also speculated there that a lower baseline or resting HRV may be reflective of depression and low regulation/motivation, while a high resting or baseline HRV may be reflective of mania and high regulation/motivation.

A recent study has looked into whether cognitive self-regulation (will power / motivation) is also associated with HRV. The study reported that higher baseline HRV was associated with more will power and a greater ability to resist temptation. The authors had also surmised that will power is a limited resource and that resisting temptation must deplete it; hence, if subjects showed higher HRV during the temptation-resisting phase, they would have exhausted their will-power reserves and would not persist in a subsequent demanding task. This is exactly what they found.

The study consisted of measuring HRV while the subjects were given a choice between eating cookies/candies or carrots. Those who chose carrots over candies (thus exhibiting more will power in resisting the temptation of candies) also showed higher HRV.

In the second experiment, after the subjects chose candy or carrots, and hence had supposedly exhausted their limited will-power reserves, they were asked to do a tough anagram exercise. Those who had chosen carrots were more likely to give up the task earlier. Yet those with higher baseline HRV showed high motivation and will power regardless of whether they chose candies or not.

This, I believe, nicely corroborates the idea that higher resting HRV is related to better self-regulation and mania, while lower baseline HRV is related to depression and poor self-regulation. So maybe our hearts do tell us a lot about ourselves, our ability to resist temptation and our will power.


Wednesday, March 21, 2007

Welcome Boing Boing readers!

It seems I have been Boing Boinged! Thanks to Mark for linking to one of the posts, there has been a dramatic increase in viewership. Welcome aboard, Boing Boing readers; I hope that while you are here you will check out the other popular articles on the left sidebar to get a flavor of The Mouse Trap. Don't forget to subscribe, or visit later, if you find some articles to your taste!


Tick, Tick, Tock: The Mouse without the Clock

In a study that could have far-reaching effects for bipolar research and treatment, Dr. Colleen McClung and her group have reported on a mouse model of bipolar disorder.

The association between circadian rhythms and bipolarity is well established, and a bipolar episode is characterized by disruptions in daily sleeping and eating rhythms, etc. Till now, the biological basis of this was not clear.

In this study, mice with the Clock gene knocked out were tested on a number of measures of bipolarity, and it was found that these mice lacking the Clock gene, which is essential for proper circadian rhythms, exhibited symptoms similar to human mania. Moreover, treating these bipolar mice with lithium resulted in the symptoms subsiding.

The study included putting the mutant mice through a series of tests, during which they displayed hyperactivity, decreased sleep, decreased anxiety levels, a greater willingness to engage in "risky" activities, lower levels of depression-like behavior and increased sensitivity to the rewarding effects of substances such as cocaine and sugar.

"These behaviors correlate with the sense of euphoria and mania that bipolar patients experience," said Dr. McClung. "In addition, there is a very high co-morbidity between drug usage and bipolar disorder, especially when patients are in the manic state."

During the study, lithium was given to the mutant mice. Lithium, a mood-stabilizing medication, is most commonly used in humans to treat bipolar patients. Once treated with the drug on a regular basis, the majority of the study's mice reverted back to normal behavioral patterns, as do humans.

The Clock gene is expressed widely in the human brain, but the focus till now has been only on the area called the suprachiasmatic nucleus. In this study, the area of the brain associated with reward learning (the VTA/striatum) was studied, and expressing the Clock gene there in the knockout mice resulted in the symptoms subsiding.

The researchers also injected a functional Clock gene protein – basically giving the mice their Clock gene back – into a specific region of the brain that controls reward functions and where dopamine cells are located. Dopamine is a neurotransmitter associated with the "pleasure system" of the brain and is released by naturally rewarding experiences such as food, sex and the use of certain drugs. This also resulted in the mice going back to normal behaviors.


This is exciting news, as it makes a mouse model of bipolarity readily available and should help in the testing of new antipsychotics and mood stabilizers.



Tuesday, March 20, 2007

Creatures of Circumstances

There is an intriguing article at BPS Research Digest regarding a study reporting on a "Chameleon man". No, this is not a superhero like Spider-Man or Superman, but a 65-year-old patient, AD, who after a stroke acquired the capacity to assume any social role that his circumstances and situations demand.

According to the report, when in the presence of doctors and in a hospital setting, he would assume the role of a doctor, with ad hoc explanations about how he came to become a doctor.

When with doctors, AD assumes the role of a doctor; when with psychologists he says he is a psychologist; at the solicitors he claims to be a solicitor. AD doesn't just make these claims, he actually plays the roles and provides plausible stories for how he came to be in these roles.

To investigate further, Giovannina Conchiglia and colleagues used actors to contrive different scenarios. At a bar, an actor asked AD for a cocktail, prompting him to immediately fulfil the role of bar-tender, claiming that he was on a two-week trial hoping to gain a permanent position. Taken to the hospital kitchen for 40 minutes, AD quickly assumed the role of head chef, and claimed responsibility for preparing special menus for diabetic patients. He maintains these roles until the situation changes. However, he didn't adopt the role of laundry worker at the hospital laundry, perhaps because it was too far out of keeping with his real-life career as a politician.

It is surmised that this is due to disinhibition - a loss of normal inhibition. AD also suffers from anterograde amnesia, which means he cannot form new long-term memories after the date of the stroke, though his previous memories remain intact.

Now, this case is very interesting, because it throws light on the power of the situation and on social role-playing. We have already discussed the personality-versus-situation debate earlier, and this adds more fuel to the fire. If it is true that, to some extent, we are all prone to assuming social roles when placed in the appropriate social context, then it seems reasonable that those of us who have some deficit in this inhibition would be more prone to succumb to the powers of the situation. An extreme case would be the highly hypnotizable subject who takes on social roles very easily, on gentle nudging by the hypnotist. The hypnotist may somehow temporarily shut off this inhibition circuit, and thus make the subject assume social roles, as it seems that assuming appropriate social roles is inbuilt.

To me this seems very fascinating, and if anyone can provide more details on this case study, or some related literature regarding the assumption of social roles in appropriate contexts, then I would be very grateful. Till then, it is a sobering thought that not only in broader contexts, but even in day-to-day contexts, we are all creatures of circumstance.



The origins of Morality

There is a decent article in the NYT that explores the work of Dr. Frans de Waal and his assertion that the roots of human morality are grounded in the sociality exhibited by primates. His contention is that animals (especially apes and monkeys) show emotions and empathy, and that with the evolution of cooperative behavior many other factors underlying morality - like reciprocity and peace-making - evolved, and these in turn set the stage for the evolution of human morality.

Social living requires empathy, which is especially evident in chimpanzees, as well as ways of bringing internal hostilities to an end. Every species of ape and monkey has its own protocol for reconciliation after fights, Dr. de Waal has found. If two males fail to make up, female chimpanzees will often bring the rivals together, as if sensing that discord makes their community worse off and more vulnerable to attack by neighbors. Or they will head off a fight by taking stones out of the males’ hands.

Macaques and chimpanzees have a sense of social order and rules of expected behavior, mostly to do with the hierarchical natures of their societies, in which each member knows its own place. Young rhesus monkeys learn quickly how to behave, and occasionally get a finger or toe bitten off as punishment. Other primates also have a sense of reciprocity and fairness. They remember who did them favors and who did them wrong. Chimps are more likely to share food with those who have groomed them. Capuchin monkeys show their displeasure if given a smaller reward than a partner receives for performing the same task, like a piece of cucumber instead of a grape.

These four kinds of behavior — empathy, the ability to learn and follow social rules, reciprocity and peacemaking — are the basis of sociality

While it is still contentious to what extent morality is inbuilt (genetic and one of the human universals) versus developed under the influence of society and culturally determined, the actual case, like everything else, may lie in between: a developmental unfolding of an inherent potentiality, with various nuances depending on the culture in which it flowers. Here Kohlberg's developmental framework would seem relevant - but that framework is too Kantian, in the sense that it emphasizes, and is based on, rational reasoning. The reality may be more Humean, as per de Waal and Hauser, whereby most moral decisions are intuitively guided, with post-hoc reasoning following the initial emotional decision.

But biologists like Dr. de Waal believe reason is generally brought to bear only after a moral decision has been reached. They argue that morality evolved at a time when people lived in small foraging societies and often had to make instant life-or-death decisions, with no time for conscious evaluation of moral choices. The reasoning came afterward as a post hoc justification. “Human behavior derives above all from fast, automated, emotional judgments, and only secondarily from slower conscious processes,” Dr. de Waal writes.


I, of course, am most sympathetic to the developmental framework and hope that someone will take up Kohlberg's framework and incorporate emotions and emotional intelligence into it.


Saturday, March 17, 2007

Psychology of security

This is an FYI post about a great article by Bruce Schneier examining the psychological issues involved in assessing various security trade-offs. He touches on all the aspects of behavioral finance, psychological biases, prospect theory, decision-making, etc. that are relevant to, and affect, our feeling of security vis-a-vis actual, objective security. Although he is not that strong when it comes to discussing the neurological basis of these, I would highly recommend reading the article in its entirety!


Wednesday, March 14, 2007

Encephalon #18 is online now!

Check out the brand new edition of Encephalon at Pharyngula. My favorites include the musical-harmony-as-grammar studies. With my obsession with extending grammar to a Universal Moral Grammar or a Universal Spiritual Grammar, this new direction of a Universal Musical Grammar seems very attractive and feasible. Chris has already presented evidence that musical melody serves the function of semantics, so we are left with morphology and pragmatics as the two remaining broad domains of language that need to be mapped to musical concepts like rhythm and dynamics.


Monday, March 12, 2007

The situational factors: compliance, personality and character

I've recently come across a new blog, the Situationist, and have just read a three-part article by the famed Philip Zimbardo (who conducted the Stanford prison experiment) titled Situational Sources of Evil.

In Part I, he discusses Stanley Milgram's compliance experiments, wherein, under the authority of a professor, subjects were led to administer what they believed were outrageous electric shocks to confederates. This experiment was a classic in social psychology and showed how, in situations of authority, normal individuals can be made to do evil deeds in the laboratory. Milgram also did a number of variations of this experiment to find out which factors facilitated compliance and which factors enabled resistance to authority.

In Part II, Zimbardo discusses how these laboratory results can be extended to real-world phenomena like the Holocaust, Palestinian suicide bombers and suicide cults, and how most of the perpetrators are very ordinary people (the banality of evil).

In Part III, Zimbardo outlines 10 lessons from Milgram's experiments, and I find them worth summarizing here.

Compliance can be increased by:

  1. A pseudo-legal contract that binds one to the act (which may not be construed as evil a priori, but becomes evil in actual execution). Also, public declarations of commitment create cognitive dissonance and make people stick to their 'contracts'.
  2. Meaningful social roles like 'teacher' etc given to the perpetrators. They may find solace under the fact that their social role demands the unavoidable evil.
  3. Adherence to and sanctity of rules that were initially agreed upon. The rules may be subtly changed, but an emphasis on rule-based behavior would guarantee better compliance.
  4. Right framing of the issues concerned. Instead of 'hurting the participant', frame it as 'improving the learner's learning ability'. Regular readers will note how committed I personally am to framing effects.
  5. Diffusion/abdication of responsibility: either letting a senior authority take on the responsibility for the evil act, or having many non-rebelling peers diffuse responsibility, similar to the bystander effect.
  6. Small evil acts initially to reduce the resistance to recruitment. Once into the fold, one may increase the atrocities demanded from the perpetrator.
  7. Gradual increase in the degree of the evil act. Sudden and large jumps in evilness of the acts are bound to be resisted more.
  8. Morphing the authority from just and reasonable initially to unjust and unreasonable later on.
  9. High exit costs. You cannot beat the system, so better join it! The system can beat you up, so better remain in it!! Also, allow dissent or freedom of voice, but suppress freedom of action!!!
  10. An overarching lie, framework or 'cover story' that gives a positive spin to the evil acts: 'this experiment will help humanity', 'Jews are bad/inferior and need to be eliminated', etc.
Zimbardo is hopeful that by recognizing these factors, which normally aid compliance with unjust and irrational authority, one can have the courage and acumen to resist such authority. The two traits he highlights are taking responsibility for one's own acts and asserting one's own authority.


The word character is normally frowned upon, and rarely used, in psychological discourse nowadays, but like Zimbardo I would like to highlight Erich Fromm's works, like Escape from Freedom, in this regard, which posit that one can overcome the natural tendency to escape from one's freedom and sense of responsibility and build a positive character, or habitual behavioral tendencies, that takes full responsibility for the self.

There is another related debate to which I would like to draw attention. Normally it is posited that we are composed of temperaments or personality traits (the most famous being the Big Five or OCEAN traits) and that much of our behavior is a result of our inherent tendencies.

A dissenting voice is that of Walter Mischel, who claims that the concept of personality is vague and that much of behavior is due to situational factors. I'm sure the truth lies more towards a middle ground: like genes and environment, both personality and situations affect a behavioral outcome. Not stopping here, I also see a role for acquired propensities, habits or character that can overcome both the underlying propensities and the situational factors. Even after taking character into account, our acts may not be totally non-deterministic, free or unpredictable, but they could be free in the limited sense that we ourselves incorporated those habits/character traits. We may still behave predictably, but that would not be due to our conditioning or situational factors; it would be because of an acquired character.



Friday, March 09, 2007

The courage of a mouse to say 'No': A case of metacognition or risk-aversion?

A recent article in Current Biology by Foote et al (courtesy Ars Technica) posits that rats have metacognitive abilities. Till now, only humans and primates were assumed to have metacognition. One defining characteristic of metacognition is knowing what you know and also knowing what you don't know. It means one can think about one's own mental states and determine what knowledge one already has and what knowledge one has not yet learned. A related ability would be the ability to decline a test of knowledge if one thinks that one has not learned enough to ace the test. Those who took the GRE or any other exam recently, and maybe postponed it, will have no difficulty appreciating that postponing or declining a test involves metacognition.

Taking this line of reasoning further, Foote et al surmise that if a rat could decline a test under conditions when it was not sure of its learned knowledge and doubted its ability to successfully complete the test, then such declining behavior would indicate that the rat has metacognitive abilities. I find no flaws in this reasoning, but I have a few quibbles about their particular experimental setup, which may have confounded the results by not factoring in risk aversion.

First, regarding the hypothesis of the experiment:

Here, we demonstrate for the first time that rats are capable of metacognition—i.e., they know when they do not know the answer in a duration-discrimination test. Before taking the duration test, rats were given the opportunity to decline the test. On other trials, they were not given the option to decline the test. Accurate performance on the duration test yielded a large reward, whereas inaccurate performance resulted in no reward. Declining a test yielded a small but guaranteed reward. If rats possess knowledge regarding whether they know the answer to the test, they would be expected to decline most frequently on difficult tests and show lowest accuracy on difficult tests that cannot be declined [4]. Our data provide evidence for both predictions and suggest that a nonprimate has knowledge of its own cognitive state.

Now on to the actual experimental setup:


Each trial consisted of three phases: study, choice, and test phases (Figure 1). In the study phase, a brief noise was presented for the subject to classify as short (2–3.62 s) or long (4.42–8 s). Stimuli with intermediate durations (e.g., 3.62 and 4.42 s) are most difficult to classify as short or long [11, 12]. By contrast, more widely spaced intervals (e.g., 2 and 8 s) are easiest to classify. In the choice phase, the rat was sometimes presented with two response options, signaled by the illumination of two nose-poke apertures. On these choice-test trials, a response in one of these apertures (referred to as a take-the-test response) led to the insertion of two response levers in the subsequent test phase; one lever was designated as the correct response after a short noise, and the other lever was designated as the correct response after a long noise. The other aperture (referred to as the decline-the-test response) led to the omission of the duration test. On other trials in the choice phase, the rat was presented with only one response option; on these forced-test trials, the rat was required to select the aperture that led to the duration test (i.e., the option to decline the test was not available), and this was followed by the duration test. In the test phase, a correct lever press with respect to the duration discrimination produced a large reward of six pellets; an incorrect lever press produced no reward. A decline response (provided that this option was, indeed, available) led to a guaranteed but smaller reward of three pellets.

The test they have used is a stimulus discrimination test. Their results indicated that the rats did indeed decline more often on difficult trials (trials in which the stimuli were closely spaced around the mean of 4 s) as compared to easy trials (in which they had to discriminate widely spaced stimuli, say 2 s and 8 s). This neatly demonstrates that the rats were internally calculating their odds of passing the test, and in the case of the difficult test they took the better option of choosing the decline-the-test condition. However, I would like to see more of their data and factor out the effects of risk aversion.

We all know that humans are prone to risk aversion. That is, if I present you with a choice between a sure amount of 100 rs and a 50% chance of winning 200 rs, you would normally choose the first option, even though the expected value is the same in both cases. In the first case you have an expected value of 100, and in the second case too you have an expected value of 100 (0.5*0 + 0.5*200). Thus it shouldn't matter much which option you prefer. This becomes more interesting when we increase the amount of the risky option: suppose we now have 100 rs assured vis-a-vis a 50% chance of 300 rs - still, most of us end up choosing the assured sum.

In this setup, the expected value of declining the test is 3 pellets; while if we assume that the rats have not learned how to discriminate the stimuli, and that they therefore press the levers at random so that each outcome of the test condition is equally probable, the expected value of taking the test is 0.5*0 + 0.5*6 = 3 pellets. So we have the same situation as with humans. Taking risk aversion into account, one would expect the rats to decline the test more often in the difficult stimulus conditions simply because that is the safe, assured option compared to the take-the-test condition. As a matter of fact, I am surprised that there were some rats who did choose the take-the-test condition. I guess men are more meek than mice!!
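
A quick back-of-the-envelope check of that arithmetic in Python, with a concave (risk-averse) utility function thrown in - a square-root utility chosen purely for illustration, not estimated from the rats:

    import math

    def expected_value(outcomes):
        """outcomes: list of (probability, pellets) pairs."""
        return sum(p * x for p, x in outcomes)

    def expected_utility(outcomes, u=math.sqrt):
        """Concave u models risk aversion: u(sure thing) > E[u(gamble)] at equal EV."""
        return sum(p * u(x) for p, x in outcomes)

    decline   = [(1.0, 3)]            # guaranteed 3 pellets
    take_test = [(0.5, 0), (0.5, 6)]  # guessing at random: 50% chance of 6, else 0

    print("EV  decline:", expected_value(decline), " take test:", expected_value(take_test))
    print("EU  decline:", round(expected_utility(decline), 2),
          " take test:", round(expected_utility(take_test), 2))
    # Equal expected value (3 vs 3), but the sure option wins on expected utility
    # (sqrt(3) ~ 1.73 vs 0.5*sqrt(6) ~ 1.22) - which is risk aversion in a nutshell.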

So the best thing to do would be to take risk aversion into account and, after factoring it out, decide whether the rats knew (in a conscious sense) that the test was difficult. Risk aversion is mostly subconscious and would not involve metacognition. However, the trend of declining more often as test difficulty rises does point to the rats having some metacognition.

I would love to have this study replicated using a maze (mouse-trap sort of) task. In a maze, the cognitive map of the maze provides a good indicator of how much the mice know about the test and its difficulty, and measuring declining behavior in this case may be more directly related to their metacognitive abilities.



Thursday, March 08, 2007

Religion continued: Throwing the baby out with the bathwater?

I recently came across the work of Norenzayan et al regarding the linkage between religion and tolerance (courtesy Mixing Memory) and found some surprising commonalities with the views I have espoused earlier.

For one, they talk about the need for religion and accept it as a human universal. They also note some aspects of religious belief that are universal.

Anthropologically-speaking, there is a near universality of 1) belief in supernatural agents who 2) relieve existential anxieties such as death and deception, but 3) demand passionate and self-sacrificing social commitments, which are 4) validated through emotional ritual (Atran & Norenzayan, 2004). There are salient similarities to be found between even the most radically divergent cultures and religions (Norenzayan & Heine, 2005).

One can see that these have clear parallels with the autistic/schizophrenic differences on the four dimensions I highlighted yesterday. Specifically:

  1. Agency: belief in supernatural agency in religious people
  2. Meaning: religious beliefs give meaning and relieve existential anxieties like those of death (Terror Management theory)
  3. Causal and Magical thinking: leading to rituals, and pro-social behaviors in the religious people
  4. Experience: an emotional and ecstatic experience of oneness with others in devotees and meditators.
The book chapter goes on to describe two aspects of religiosity: a subjective/natural one and an objective/coalitional one. To put it in simple words, one is belief in a personal, felt and experienced god (combining 1 and 4 above), and the other is the traditional, scripture- and culture-driven coalitional religion that binds people together and provides them with a sense of meaning and purpose (combining 2 and 3 above).

For centuries, those who have attempted to explain religion (and even those who have propagated certain religions) have often distinguished two aspects of religion, treating them not only as distinct but also as opposites.

Dual understandings of religion generally consider a sense of the omnipresence of the divine (whether sensed directly and spontaneously or with the aid of prayer, meditation or drug-ingestion) more subjective/natural than it is socially transmitted/cultural.

Some illustrative examples are: James’ (1982/1902) distinction between the “babbling brook” from which all religions originate (p. 337) and the “dull habit” of “second hand” religion “communicated … by tradition” (p. 6) as well as that between “religion proper” and corporate and dogmatic dominion (p. 337); Freud’s (1930/1961) distinction between the “oceanic feeling” as an unconscious memory of the mother’s womb and “religion” as acceptance of religious authority and morality as a projection of the father; Weber’s (1947, 1978) distinction between religious charisma in its basic and “routinized” forms; Adorno’s distinction between “personally experienced belief” and “neutralized religion” (Adorno, Frenkel-Brunswick, Levinson, & Sanford, 1950); Rappaport’s (1979) distinction between the “numinous”—the experience of pure being--and the “sacred” or doctrinal; and, more recently, Sperber’s (1996) cognitive distinction between “intuitive” beliefs—“the product of spontaneous and unconscious perceptual and inferential process” (89), and “reflective” beliefs “believed in virtue of other second-order beliefs about them.”

The authors then go on to synthesize material on the tolerance-religiosity linkage and explain how subjective/natural religiosity is inversely related to intolerance, while coalitional/objective religiosity is directly related to, and co-occurs with, intolerance and prejudice. A note of caution though: the authors do not consider the two dimensions of religion independent, but find a positive correlation between the two.

The measures we are most concerned with are those tapping religious devotion, rooted in supernatural belief, and coalitional religiosity, rooted in the costly commitment to a community of believers—a community that is morally and epistemically elevated above other communities. Religious devotion centers on the awareness of and attention to God or the “divine” broadly conceived.

Coalitional religiosity, on the other hand, should be approximated by validated scales measuring what social psychologists consider coalitional boundary-setting social tendencies, such as authoritarianism, fundamentalism, dogmatism and related constructs (e.g., Kirkpatrick 1999).

The authors then go on to explain coalitional religiosity in terms of sexual selection and costly signalling, instead of group selection as we had discussed yesterday.

Coaltional religiosity is likely rooted in the costly sacrifice to the community of believers that is the hallmark of religion. As evolutionary theorists have noted, sacrificial displays can be selected for if carriers of honest signals of group membership are more likely to be reciprocated by a community of cooperators. Even in rights-oriented “individualist” cultures, one is expected to sacrifice all selfish gains that might accrue from being on the benefiting end of injustice towards others. Atran (2002) and others (Atran & Norenzayan, 2004; Sosis & Alcorta, 2003) note that sincere expressions of willingness to make any kind of sacrifice (including the potential ultimate sacrifice of one’s own life) only occasionally necessitate actually following through on that sacrifice in a way that has long term costs to the potential for survival and reproduction of the genes carried by that individual. However, the material and social support benefits that can accrue to those who sincerely express or demonstrate such willingness are both more likely to occur and are of more obvious value to the long term survival of one’s genes—unless one is among the unlucky individuals whose sincere demonstration involves actually dying before reproductive potential is maximized (and even then, socially-given benefits to close kin may offset the genetic loss of one individual). This “adaptive sacrifice display” explanation for religious devotion is related to the evolutionary concept of “costly signaling”, a process that explains many forms of sacrificial displays in the animal kingdom, for example, why male peacocks who burden themselves with more costly plumage may nevertheless be more likely to pass on their genes, by increasing their chances of mating with a receptive female. Costly signaling theory offers an explanation of why humans engage in altruistic displays such as sacrifice and ritual without treating the group as a unit of selection (Sosis & Alcorta, 2003).

While I disagree with the above explanations for coalitional religiosity, I still believe that it works primarily to ensure altruism/pro-social behavior and to manage existential anxieties. The evolutionary rationale for subjective or intrinsic religiosity (spirituality) is much more problematic. The authors believe it was selected because it enables us to empathize and to transcend group boundaries.

That coalitional religiosity encourages intolerance towards outgroups seems obvious. But it is less clear why devotional religiosity can, under some conditions, foster tolerance. Some evidence from neuroscience may help us with a novel speculation as to the process by which devotional experience may lead to transcendence of group boundaries. Some investigations (e.g. Holmes, 2001; d’Aquili & Newberg, 1998, 1999; Newberg, d’Aquili & Rause, 2001) have found that when people are subjectively experiencing a transcendent or supernatural-oriented state, there is often decreased activity in the parietal lobe or other object association areas, where perceptions that distinguish self from non-self are processed... These areas may play a role in any relationship prayer might have to greater tolerance, empathy or other-concern, since they all seem potentially relevant to whether sense of self is experienced in a more limited or more expansive way. Perhaps commonplace empathic experiences of seeing oneself in another or caring for another as one would care for oneself have some family relationship to rarer mystical experiences of “oneness” and even to more extreme cases where the self-other boundary melts down completely.

They finally get to why transcendence is needed or what function it serves.

Coalitional religiosity arguably reflects a limited kind of self-transcendence that simply upgrades individual selfishness to group selfishness, sometimes with dramatically violent consequences. Yet religious devotion’s independent relationship to tolerance suggests that religion has the potential to transcend group selfishness as well. It is almost as if a more limited religious transcendence is in tension with a more thoroughgoing transcendence. What lies beyond group selfishness we may dub “God-selfishness,” a focus of oneself on a God or divine being or principle that is transcendent of all individuals and groups, including oneself and one’s own groups. God-selfishness would appear to be what religious devotion measures tap into when the variance of coalitional religiosity is controlled for. To the extent that this broader transcendence of self often manifests itself as a tolerant sense of kinship with all, then it would appear to render Dawkins’ pessimism about religion unwarranted.


With that note I'll end the post, and exhort the readers not to throw the baby out with the bathwater when it comes to religion/spirituality.



Wednesday, March 07, 2007

Science and Religion revisited: a case for a universal spiritual grammar?

I have earlier commented on how science and religion may be alternate frames via which we try to make sense of our lives and the world, and how autistic thinking may be more related to scientific leanings, while a schizophrenic thinking style is more prone to religiosity/spirituality. I have also commented recently on how one may view a mind as composed of Agency and Experience, while a brain is composed of neither agency nor experience, thus again bifurcating our concepts of the self along religious/scientific lines. One may add to this too much causal reasoning about the world as opposed to a belief in randomness, and extend this to the earlier observations on autistic and schizophrenic thinking styles:

To recap:

  • The autistic/scientific thinking style attributes too little agency or intentionality (even to fellow human beings), while the schizophrenic/spiritual thinking style attributes too much agency (even to non-living things).
  • Autistic thinking is more of the correlation-is-not-causation type and makes sense of the world via statistical inference and reasoning. (I am deliberately not calling it probabilistic reasoning, as there is a difference between probabilistic reasoning based on an understanding of the events involved - say, the fact that a die has six faces and is equally likely to land on each - and reasoning based on just number-crunching a past data set - say, statistically estimating the chances of the next throw from the outcomes of many previous throws; see the short sketch after this list.) The schizophrenic thinking style, by contrast, is more jumping-to-conclusions and more of the causal (cause-and-effect) type of reasoning.
  • Autistic thinking attributes no experience of feelings, beliefs, etc. to fellow humans (mind blindness), while a schizophrenic style is marked by a feeling that one can intuit the thoughts and feelings of others, and a converse belief that one's own thoughts and feelings are being broadcast - that is, belief in too much experience, by self as well as others. The theory-of-mind or mirror-neuron systems may go into overdrive in a psychotic episode.
  • Autistic thinking is more realistic/literal, while schizophrenic thinking is more symbolic/metaphorical. One could summarize this as too much meaning on the one hand (there is a meaning to life, etc.) versus a nihilistic attitude on the other (evolution has no meaning, and evolution is not progress).
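
To make the statistical-versus-probabilistic distinction in the second bullet concrete, here is a minimal Python sketch (my own illustration with simulated throw data, not anything from the post or the articles quoted below) contrasting the two ways of assigning a probability to the next throw of a die:

```python
import random
from collections import Counter

random.seed(0)

# Model-based ("probabilistic") reasoning: we understand the die itself,
# so each of its six faces gets probability 1/6 before seeing any data.
model_prob = {face: 1 / 6 for face in range(1, 7)}

# Data-based ("statistical") reasoning: ignore the model and simply count
# how often each face turned up in a record of past throws.
past_throws = [random.randint(1, 6) for _ in range(1000)]
counts = Counter(past_throws)
empirical_prob = {face: counts[face] / len(past_throws) for face in range(1, 7)}

for face in range(1, 7):
    print(f"face {face}: model {model_prob[face]:.3f}, empirical {empirical_prob[face]:.3f}")
```

With enough past throws the two estimates converge, but only the first requires an understanding of the die; the second is pure number crunching on the data set.
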
I read a recent NYT article by Robin Marantz Henig that nicely summarizes the major arguments on why religion evolved and whether it is a spandrel or an adaptation. I would now like to quote from that article one account of how religion could have evolved as a spandrel:

Hardships of early human life favored the evolution of certain cognitive tools, among them the ability to infer the presence of organisms that might do harm, to come up with causal narratives for natural events and to recognize that other people have minds of their own with their own beliefs, desires and intentions. Psychologists call these tools, respectively, agent detection, causal reasoning and theory of mind.

These map very well onto our autistic/scientific versus schizophrenic/religious/artistic dichotomy. It is interesting to note that evolution itself decreed that we have capacities for agent detection, causal reasoning (though in a strictly scientific sense we should be using statistical or probabilistic Bayesian reasoning) and theory of mind. Schizophrenics, on this view, are the evolutionary cost of having these capacities. Later in the article it is also mentioned that making sense of death and life may be one reason religion evolved. But first, the importance of each of these abilities:

Agent detection evolved because assuming the presence of an agent — which is jargon for any creature with volitional, independent behavior — is more adaptive than assuming its absence. If you are a caveman on the savannah, you are better off presuming that the motion you detect out of the corner of your eye is an agent and something to run from, even if you are wrong. If it turns out to have been just the rustling of leaves, you are still alive; if what you took to be leaves rustling was really a hyena about to pounce, you are dead.

So if there is motion just out of our line of sight, we presume it is caused by an agent, an animal or person with the ability to move independently. This usually operates in one direction only; lots of people mistake a rock for a bear, but almost no one mistakes a bear for a rock.

What does this mean for belief in the supernatural? It means our brains are primed for it, ready to presume the presence of agents even when such presence confounds logic. “The most central concepts in religions are related to agents,” Justin Barrett, a psychologist, wrote in his 2004 summary of the byproduct theory, “Why Would Anyone Believe in God?” Religious agents are often supernatural, he wrote, “people with superpowers, statues that can answer requests or disembodied minds that can act on us and the world.”

A second mental module that primes us for religion is causal reasoning. The human brain has evolved the capacity to impose a narrative, complete with chronology and cause-and-effect logic, on whatever it encounters, no matter how apparently random. “We automatically, and often unconsciously, look for an explanation of why things happen to us,” Barrett wrote, “and ‘stuff just happens’ is no explanation. Gods, by virtue of their strange physical properties and their mysterious superpowers, make fine candidates for causes of many of these unusual events.” The ancient Greeks believed thunder was the sound of Zeus’s thunderbolt. Similarly, a contemporary woman whose cancer treatment works despite 10-to-1 odds might look for a story to explain her survival. It fits better with her causal-reasoning tool for her recovery to be a miracle, or a reward for prayer, than for it to be just a lucky roll of the dice.

A third cognitive trick is a kind of social intuition known as theory of mind. It’s an odd phrase for something so automatic, since the word “theory” suggests formality and self-consciousness. Other terms have been used for the same concept, like intentional stance and social cognition. One good alternative is the term Atran uses: folkpsychology.

Folkpsychology, as Atran and his colleagues see it, is essential to getting along in the contemporary world, just as it has been since prehistoric times. It allows us to anticipate the actions of others and to lead others to believe what we want them to believe; it is at the heart of everything from marriage to office politics to poker. People without this trait, like those with severe autism, are impaired, unable to imagine themselves in other people’s heads.

The process begins with positing the existence of minds, our own and others’, that we cannot see or feel. This leaves us open, almost instinctively, to belief in the separation of the body (the visible) and the mind (the invisible). If you can posit minds in other people that you cannot verify empirically, suggests Paul Bloom, a psychologist and the author of “Descartes’ Baby,” published in 2004, it is a short step to positing minds that do not have to be anchored to a body. And from there, he said, it is another short step to positing an immaterial soul and a transcendent God.

The adaptive advantage of folkpsychology is obvious. According to Atran, our ancestors needed it to survive their harsh environment, since folkpsychology allowed them to “rapidly and economically” distinguish good guys from bad guys.


This is an interesting line of argument, and later in the article one also finds mention of group selection and co-operation being important for the evolution of religiosity.
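
The agent-detection argument quoted above is, at bottom, an expected-cost calculation: false alarms are cheap, misses are fatal. A minimal Python sketch with made-up payoff numbers (my own illustration, not figures from the article) shows why the bias towards assuming an agent pays off:

```python
# Hypothetical numbers, purely illustrative: fleeing from a rustle costs a
# little wasted energy; ignoring a real predator can cost your life.
p_predator = 0.01            # assumed chance that an ambiguous rustle is a real predator
cost_of_fleeing = 1          # energetic cost of running away from what may be nothing
cost_of_being_caught = 1000  # cost of ignoring a hyena that was really there

# Expected cost per ambiguous rustle under the two policies
always_assume_agent = cost_of_fleeing                     # flee every single time
never_assume_agent = p_predator * cost_of_being_caught    # never flee

print(f"always assume an agent: expected cost {always_assume_agent:.2f}")
print(f"never assume an agent:  expected cost {never_assume_agent:.2f}")
```

Even though the "always assume an agent" policy is wrong ninety-nine times out of a hundred here, its expected cost is far lower, which is why a hyperactive agent-detection module can be adaptive.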

The bottom line, according to byproduct theorists, is that children are born with a tendency to believe in omniscience, invisible minds, immaterial souls — and then they grow up in cultures that fill their minds, hard-wired for belief, with specifics. It is a little like language acquisition, Paul Bloom says, with the essential difference that language is a biological adaptation and religion, in his view, is not. We are born with an innate facility for language but the specific language we learn depends on the environment in which we are raised. In much the same way, he says, we are born with an innate tendency for belief, but the specifics of what we grow up believing — whether there is one God or many, whether the soul goes to heaven or occupies another animal after death — are culturally shaped.

This calls for a fuller post detailing and charting the universal religious/spiritual grammar, similar to the exercise I did for the Universal Moral Grammar. Wait a little for that post! For now, on to the religion-as-an-antidote-to-death-anxiety argument (which covers our fourth difference between the autistic and schizophrenic styles, the one centered on meaning, or the lack of it):

Fear of death is an undercurrent of belief. The spirits of dead ancestors, ghosts, immortal deities, heaven and hell, the everlasting soul: the notion of spiritual existence after death is at the heart of almost every religion. According to some adaptationists, this is part of religion’s role, to help humans deal with the grim certainty of death. Believing in God and the afterlife, they say, is how we make sense of the brevity of our time on earth, how we give meaning to this brutish and short existence. Religion can offer solace to the bereaved and comfort to the frightened.

Now for the adaptationist arguments:

Intriguing as the spandrel logic might be, there is another way to think about the evolution of religion: that religion evolved because it offered survival advantages to our distant ancestors.

So trying to explain the adaptiveness of religion means looking for how it might have helped early humans survive and reproduce. As some adaptationists see it, this could have worked on two levels, individual and group. Religion made people feel better, less tormented by thoughts about death, more focused on the future, more willing to take care of themselves.


There is ample research that religious people are happier and live longer than atheists, so religion still confers evolutionary advantages. It makes sense that being religious and believing in an omnipotent god can contribute to longevity in multiple ways: abstaining from alcohol and tobacco and reduced death anxiety are a few that come to mind.

Still, for all its controversial elements, the narrative Wilson devised about group selection and the evolution of religion is clear, perhaps a legacy of his novelist father. Begin, he says, with an imaginary flock of birds. Some birds serve as sentries, scanning the horizon for predators and calling out warnings. Having a sentry is good for the group but bad for the sentry, which is doubly harmed: by keeping watch, the sentry has less time to gather food, and by issuing a warning call, it is more likely to be spotted by the predator. So in the Darwinian struggle, the birds most likely to pass on their genes are the nonsentries. How, then, could the sentry gene survive for more than a generation or two?

To explain how a self-sacrificing gene can persist, Wilson looks to the level of the group. If there are 10 sentries in one group and none in the other, 3 or 4 of the sentries might be sacrificed. But the flock with sentries will probably outlast the flock that has no early-warning system, so the other 6 or 7 sentries will survive to pass on the genes. In other words, if the whole-group advantage outweighs the cost to any individual bird of being a sentry, then the sentry gene will prevail.

There are costs to any individual of being religious: the time and resources spent on rituals, the psychic energy devoted to following certain injunctions, the pain of some initiation rites. But in terms of intergroup struggle, according to Wilson, the costs can be outweighed by the benefits of being in a cohesive group that out-competes the others.

There is another element here too, unique to humans because it depends on language. A person’s behavior is observed not only by those in his immediate surroundings but also by anyone who can hear about it. There might be clear costs to taking on a role analogous to the sentry bird — a person who stands up to authority, for instance, risks losing his job, going to jail or getting beaten by the police — but in humans, these local costs might be outweighed by long-distance benefits. If a particular selfless trait enhances a person’s reputation, spread through the written and spoken word, it might give him an advantage in many of life’s challenges, like finding a mate. One way that reputation is enhanced is by being ostentatiously religious.

This brings us to the controversial group-selection part; some recent research has suggested that group selection does, after all, work.
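
Wilson's sentry illustration is easy to put into numbers. Here is a minimal Python sketch, using made-up figures of my own rather than anything from "Darwin's Cathedral", showing how the sentry gene can lose ground within its own flock yet gain in the combined population when flocks with sentries survive predation much more often:

```python
def expected_survivors(n_sentries, n_others, flock_survival, sentry_cost):
    """Expected birds of each type left after one season of predation.

    Sentries pay an individual cost (some are caught while keeping watch),
    but the whole flock's chance of surviving depends on having sentries at all.
    """
    sentries_left = max(n_sentries - sentry_cost, 0)
    return flock_survival * sentries_left, flock_survival * n_others

# Flock A: 10 sentries, 90 others; early warnings keep most of the flock alive.
a_sent, a_oth = expected_survivors(10, 90, flock_survival=0.9, sentry_cost=3)
# Flock B: no sentries; the whole flock is far more likely to be wiped out.
b_sent, b_oth = expected_survivors(0, 100, flock_survival=0.2, sentry_cost=0)

within_flock_a = a_sent / (a_sent + a_oth)                       # sentries lose ground locally
overall = (a_sent + b_sent) / (a_sent + a_oth + b_sent + b_oth)  # but gain globally

print(f"sentry share within flock A: {within_flock_a:.1%} (started at 10.0%)")
print(f"sentry share overall:        {overall:.1%} (started at 5.0%)")
```

With these particular numbers the sentry gene declines within its own flock but still increases its share of the total population, which is the trade-off Wilson describes; shrink the between-flock survival difference and the within-group cost wins out and the gene disappears.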

“The study of evolution is largely the study of trade-offs,” Wilson wrote in “Darwin’s Cathedral.” It might seem disadvantageous, in terms of foraging for sustenance and safety, for someone to favor religious over rationalistic explanations that would point to where the food and danger are. But in some circumstances, he wrote, “a symbolic belief system that departs from factual reality fares better.” For the individual, it might be more adaptive to have “highly sophisticated mental modules for acquiring factual knowledge and for building symbolic belief systems” than to have only one or the other, according to Wilson. For the group, it might be that a mixture of hardheaded realists and symbolically minded visionaries is most adaptive and that “what seems to be an adversarial relationship” between theists and atheists within a community is really a division of cognitive labor that “keeps social groups as a whole on an even keel.”


Symbolic belief systems having group-selection advantages fits with our symbolic/realistic dichotomy. On this note I would like to end the discussion and ask readers whether they see a need for religion/spirituality now, or whether they still prefer the realistic, scientific method as the only true method, even at the cost of it robbing us of meaning.
