Tuesday, October 31, 2006

color memory, stroop test and models of working memory

BPS Research Digest and Mixing Memory have both commented on a recent study showing that our memory of the color associated with a particular object affects our actual color perception.

As per this study, because we have normally only seen yellow bananas and that color association is quite strong in our minds, when we view a differently colored banana we are bound to see it as more yellowish than the actual hue in which it is presented.

Basically, they used two elegant experiments to show that when we view a banana (which is generally yellow), the yellow color perception is automatically activated in our brains: a banana physically matched to gray thus appears yellowish, while the task of matching a pink banana to a gray background results in a bluish-gray banana, as blue is the opponent color of yellow and blue gets added to the background gray to compensate for the memory-activated yellow color perception.
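
As a back-of-the-envelope illustration of that compensation logic, here is a toy sketch (my own made-up numbers on a single blue-yellow axis, not the study's actual psychophysics) in which memory simply adds a small 'yellow' pull to anything banana-shaped; to make the banana look gray, the physical setting has to be pushed toward blue:

```python
# Toy illustration of memory-color compensation on the blue-yellow opponent axis.
# Positive values = yellowish, negative = bluish. The bias size is an assumption.

MEMORY_YELLOW_BIAS = 0.15   # yellow pulled in by the banana's remembered color

def perceived_blue_yellow(physical_value, is_banana_shaped):
    """Perceived blue-yellow value = physical value plus memory bias for bananas."""
    return physical_value + (MEMORY_YELLOW_BIAS if is_banana_shaped else 0.0)

# A physically gray banana (physical value 0.0) is seen as slightly yellowish:
print("gray banana is perceived as:", perceived_blue_yellow(0.0, True))    # 0.15

# To be *perceived* as gray (0.0), the banana's physical setting must be bluish:
print("setting that is perceived as gray:", 0.0 - MEMORY_YELLOW_BIAS)      # -0.15
```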

It is interesting to draw parallels here with the Stroop test. In this test, color words like 'red' and 'yellow' also appear to invoke automatic activation of the corresponding color in the brain, which interferes with naming the actual color in which the word is printed. Developing Intelligence has a very interesting and promising post exploring current research and computational models which suggest that the mechanism underlying Stroop interference is not directed inhibition of prepotent responses but lateral excitation among color and linguistic perception modules. On this account, the color perception area of the brain is always activated when a color word is presented; in incongruent trials more activation is seen in this to-be-ignored module because two conflicting color activations (one due to the actual ink color of the word, the other due to the color named by the word, e.g. 'red') compete against each other. This contrasts with the view that the extra activation reflects directed inhibition. The new explanation also fits better with brain anatomy: inhibitory processes in the neocortex are largely local, and the account is reconcilable with the lack of long-range inhibitory pathways there.
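
As a rough illustration of that lateral-excitation account, here is a toy sketch (my own construction, not the model discussed in that post; the excitation weight is an arbitrary assumption). The word unit always feeds some excitation into the color module, so on incongruent trials two color representations are active and compete, with no directed-inhibition connection modeled anywhere:

```python
# Toy sketch of the lateral-excitation account of Stroop interference.
# The excitation weight and activation values are illustrative assumptions.

WORD_TO_COLOR_EXCITATION = 0.6   # reading the word 'red' always excites the red percept

def color_module_activity(ink_color, word):
    """Activation of color representations on a single Stroop trial."""
    activation = {}
    activation[ink_color] = activation.get(ink_color, 0.0) + 1.0                # bottom-up ink color
    activation[word] = activation.get(word, 0.0) + WORD_TO_COLOR_EXCITATION     # lateral excitation from the word
    return activation

def competing_activity(ink_color, word):
    """Activity of color percepts that conflict with the ink color (the to-be-ignored input)."""
    return sum(v for color, v in color_module_activity(ink_color, word).items()
               if color != ink_color)

if __name__ == "__main__":
    print("congruent   (RED in red ink):", color_module_activity("red", "red"),
          "conflict:", competing_activity("red", "red"))
    print("incongruent (BLUE in red ink):", color_module_activity("red", "blue"),
          "conflict:", competing_activity("red", "blue"))
```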

Thus, to me it seems increasingly plausible that the Stroop effect may be due to an actual 'yellowish' hue being perceived in the brain on reading the word 'yellow'. I know the two examples are not the same: a yellow banana actually is yellow, and its remembered color may affect the perception of a strangely colored banana; but maybe the word 'yellow' is also bound very strongly in our minds to the actual yellow hue percept, and maybe we are all synaesthetic to the extent that we literally see color words in color rather than in black-and-white (or whatever the text color is).




Self-awareness in elephants!

As per a recent news report, elephants too have been found to show self-awareness. The test was whether an elephant could identify a mark on her own body while standing in front of a mirror, together with observations of her behavior toward the mirror. This is a classical measure of self-awareness, though some disagree about its importance.

After apes and dolphins, elephants also seem to have self-awareness!

The original study is available at PNAS and offers some convincing data.

Update: A video of the elephant touching the mark on her body after seeing her reflection in the mirror is available at the Neurophilosopher.


Monday, October 30, 2006

The Synapse, spooky issue 10: unleashed just now to scare the wits out of you!

The Neurocritic has just unleashed a very viral and resistant new strain of the Synapse that is bound to keep you glued to your monitors for quite some time. Captivating visuals accompany the best in science reporting, from the Dilbert/spasmodic dysphonia connection to an exhortation to goalies to keep their eyes on the puck.

If you do get infected, remember that I warned you in advance!


A new issue of the Tangled Bank

Josh, of Thoughts From Kansas, has just published a brand new edition of the Tangled Bank, despite very short notice. Kudos to him for compiling such a nice collection of articles, some of which are focused on evolution.


Friday, October 27, 2006

Hope for the elderly: Cognitive programs that keep you fit

For those worried about age-related cognitive decline, senile dementia, or Alzheimer's, there is hope around the corner. A study published in PNAS has demonstrated that brain plasticity-based training, involving a 1-hour workout 5 days a week for 8-10 weeks, led to significant improvement both on the training exercises and on measures of general auditory memory, as assessed by the global auditory memory scale of the RBANS. The exercises were relatively simple, like syllable matching and identification or narrative comprehension and word span, but were focused on enhancing the brain's plasticity.

While the effect of cognitive training on childhood cognitive development is relatively well documented, the effects for adults and the aging population are less established. Another, similar study has recently shown that even computer-based interventions can lead to significant increases in cognitive functioning, even for a seriously incapacitated population like those suffering from Alzheimer's. This should be of some cheer to those who are tormented by thoughts of incapacitation in senility: if we give our brains a proper workout, we can reverse, or at least limit, the presumably 'inevitable' cognitive decline with age.


A gene that affects episodic memory?

A tantalizing study published in Science indicates that an SNP in a single gene, KIBRA, could account for a difference as big as 25% in the outcome of a free recall test measuring episodic memory. The KIBRA gene is expressed in the medial temporal lobe (the hippocampal region), and using fMRI the authors were able to demonstrate different levels of activation in this brain area for carriers versus non-carriers of the T allele while they were engaged in a retrieval task.

Human memory is a polygenic trait. We performed a genome-wide screen to identify memory-related gene variants. A genomic locus encoding the brain protein KIBRA was significantly associated with memory performance in three independent, cognitively normal cohorts from Switzerland and the United States. Gene expression studies showed that KIBRA was expressed in memory-related brain structures. Functional magnetic resonance imaging detected KIBRA allele–dependent differences in hippocampal activations during memory retrieval. Evidence from these experiments suggests a role for KIBRA in human memory

This is important work and could lead to much insight into the memory-formation mechanisms involved.

Hat Tip: Small Gray Matters


Thursday, October 26, 2006

Gender bias in Math skills: a case of Traits vs. Environment/Effort feedback?

A recent news article reports on a study demonstrating that the gender gap in math abilities may be due to environmental and cultural effects, specifically the negative self-perception produced by activation of the negative stereotype that women have grossly inferior mathematical abilities compared to men.

The experiment involved giving 220 female participants bogus scientific explanations for alleged sex differences in math and then having them take math tests. Those who were given a 'nature' explanation (that women have a different genetic composition than men and that their lower math abilities are genetic and gender-based) performed poorly on the math tests compared to the group given a 'nurture' explanation: an experiential account of the sex differences, such as math teachers treating boys preferentially during the first years of math education.

In the control condition, some participants were told that no sex differences exist, while another group was reminded (primed) of the stereotype of female math underachievement.

The worst performance came from the women given the genetic explanation, followed by the stereotype-primed group. Those given an experiential explanation performed as well as or better than the control group told that there were no sex differences in math ability.

While the authors analyze and explain the results in terms of stereotype theory (genetic explanations lead to more negative stereotypes, and activation of the negative stereotype impairs performance), a more parsimonious explanation is that the differences reflect the same differential outcomes observed in people who hold a genetic, trait-like versus an effort-driven, skill-like view of abilities. I have discussed previously how these differential views of ability may develop, and the experiment above provides just the right conditions to induce such a differential view.

Those who were given a genetic explanation of sex differences in math abilities may have formed a trait-like view of math ability and been prone to see the ability as stable, genetic, and immutable. This is the same view of math ability that would be formed if they had been given generic feedback like "you are a math prodigy".

Those who had been given experiential explanations of sex differences would have been more prone to form a skill-like view of math abilities and to assume that the ability could be improved and honed with environmental inputs like proper teaching, guidance, strategy, or effort. This would have been the case if they had been given 'specific' feedback like "you solved this math problem very well this time".

It is evident that a large part of the difference in math test results observed in the genetic versus experiential explanation conditions can be explained by the different views of math ability that these experimental conditions had induced. Those holding a trait-like view of math ability would get frustrated while tackling a difficult problem and would be less resilient and effortful when tackling a later, easier problem on the test, as they would have formed a negative self-perception as someone with little mathematical talent. On the other hand, those induced to form a skill-like view of math ability would have been more resilient and effortful on later problems despite early failures, as a failure would not have led to a resigned state of mind but only to the belief that their strategies, effort, or earlier training had not been sufficient for that particular problem.

It is not my contention that negative stereotype activation plays no role (priming with stereotype words does lead to measurable effects on performance), but in this case, even if stereotype activation is involved, the stereotype may work by activating the differential view of mathematical ability, with its effects mediated by the impact such views have on test performance.


Monday, October 23, 2006

Encephalon #9 is out!

The latest edition of that fantastic brain carnival called Encephalon is now out. The ninth edition has been very nicely presented by Dan at Migrations.

There are special focus sections on Learning, Perception, Autism and Neuroscience, so take your pick and enjoy!


Friday, October 20, 2006

Synapse: the special SfN edition now online

A new edition of the Synapse, featuring from-the-tracks coverage of the SfN conference, is now available at Pure Pedantry. Don't miss all the action taking place at SfN.


Thursday, October 19, 2006

Mouse research: Genetic footprints of anxiety?

A recent study has determined that a single nucleotide polymorphism (SNP) in the human BDNF (brain-derived neurotrophic factor) gene, which substitutes a Met allele for Val, may be a predictor of increased susceptibility to anxiety and depression. The study involved placing mice homozygous for the Met allele in stressful situations; these mice exhibited a considerable increase in anxiety over normal mice facing similar situations. Thus, a potential locus and mechanism for anxiety/depression has become available.

It is interesting to note that a similar Met/Val SNP in the COMT gene has been implicated in schizophrenia and affects cognitive performance in frontal regions. In the COMT case, though, those who carry the Met allele are the more fortunate ones, in the sense that the Val allele causes increased metabolism of dopamine and other catecholamines.

While a Met allele is good in a schizophrenia-related gene, it has the reverse effect in a depression/anxiety-related gene! What exactly does the Met/Val difference mean for a gene?

Hat Tip: The Mind Blog


Alien vs. Predator: would eugenics and mate selection divide us in two?

An interesting discussion is going on at Slashdot regarding the recent speculation by LSE theorist Oliver Curry that humans may split into two species as a result of mate selection, very much like the Eloi and Morlocks conceived by H. G. Wells.

As per the standard evolutionary account of how new species form, new species result from existing species when interbreeding between two factions of the old species stops and genetic variations accumulate separately in the two populations, making them different from each other and eventually unable to interbreed. The original lack of interbreeding that results in a species split may be due to accidental genetic changes that make interbreeding difficult or less likely (or make the resulting offspring unhealthy or inviable), or it may be a direct result of sexual selection and preferential mating. This mode of species origination has also been experimentally demonstrated in fruit flies.

Curry theorizes that sexual selection will become prominent in the near future and eventually lead to a bifurcation of the human species along intelligence/attractiveness lines, with the more intelligent and beautiful (Eloi) making up one stratum and the dim-witted and ugly (Morlocks) the other.

This is not inconceivable, as intelligence and attractiveness (things like height, beauty, etc.) have been found to covary in humans, and people do take these factors into account when choosing mates.

An added twist is provided by the fact that SES, or wealth, is related to intelligence, and thus the bifurcation would also happen along economic lines. Again, wealth and status are attributes that figure heavily in mate selection.

But for this process to take shape, interbreeding has to be prevented, or become less and less probable, and we know that we humans are still not that choosy and do interbreed freely.


What could accelerate and lock in the process of genetic differentiation is modern genetic research, which may once more lead to eugenics-style human-enhancement efforts, with the rich having more of these tools at their disposal than the poor. This is exactly the point Peter Singer makes in his editorial "Gene Therapy" in today's TOI, where he comes to a similar conclusion that we may be headed for a split down the line.

I had speculated on something similar some time back, but my reasoning was guided more by the evolutionary pressures our ancestors might have faced during the EEA and whether those had laid the foundation for a split in the human lineage. To be precise, I had speculated that the different foraging styles our ancestors adopted during the EEA had led to the evolution of different personality traits consistent with each style (there was some research indicating that a foraging style based on begging, or incessantly nagging compatriots for food, might have been associated with low Agreeableness and contributed to the emergence of the Agreeableness trait). Further, once people started assuming a certain foraging and personality style, they might have interbred within that class, leading to the concentration of that trait in that population.

Fortunately, once the EEA pressures were over, the populations mixed with each other and the personality traits dispersed through the population. There is not much evidence to back this theory, but it highlights one important point: there has to be environmental pressure on a species that makes it breed selectively and leads to the emergence of new traits. If humanity manages not to destroy itself (by nuclear catastrophe or whatever), then I cannot see any environmental pressures that would enforce a lack of interbreeding. We can thus rest assured that we are not going to split in two. There will always be that quirky beautiful lady who marries the dumb, ugly bloke, motivated solely by that elusive thing called love and not giving a damn about conforming to the standard sexual selection model, as long as we can ensure that we do not subject her to the evolutionary pressures her ancestors faced, pressures which have become mostly irrelevant since we humans started controlling our environments.

Update: An interesting round-up of the prominent blog posts debunking this claim has been compiled by Coturnix at A Blog Around the Clock. It is interesting to note that while John Wilkins disagrees with the analysis because he thinks that human speciation, if it happens, will happen due to isolation (allopatric speciation) and lack of interbreeding, and that sympatric speciation is not relevant to us, John Hawks takes a completely different view and assumes that if human divergence were to take place it would most likely be sympatric, requiring natural selection against intermediate phenotypes. He rules out the scenario of all the Morlocks being shipped to an island and isolated there as unlikely! He does mention some intricacies involved in assortative mating and sympatric speciation which are worth musing over. The take-home is that we are not going to split!!
My own take had focused more on parapatric speciation, in which environmental pressures are a key factor. Drastic environmental change combined with partial isolation (occupation of niches by daughter species) and the resultant selective interbreeding is the posited mechanism here; it requires neither complete geographic isolation of the two diverging species (as in allopatry) nor that those heterozygous at the differentiating gene locus have lower reproductive fitness than homozygotes (the sympatry requirement).


Wednesday, October 18, 2006

Should you read my blog or my short-stories/poems?

BPS Research Digest has reported on an interesting study which found that lifetime reading of fiction is associated with enhanced empathic abilities, while reading predominantly non-fiction is associated with the converse. The study suffers from some limitations (the usual caveat that correlation is not causation: empathic people might be more drawn to fiction rather than the other way round) as well as methodological constraints (it used familiarity with fiction and non-fiction writers' names as the criterion for exposure to each genre; by this measure I would do well in both cases, as I was a prolific fiction reader earlier but in recent years have been reading non-fiction almost exclusively, so my familiarity with fiction authors doesn't reflect my current fiction exposure). Still, the results are tantalizing and the implications profound.

For me that raises the question of whether I should also occasionally post some of my short stories on this blog, in a bid to balance the drop in empathy that my readers will undergo by reading my non-fiction!!

There is another interesting study highlighted in this week's BPS Digest, revealing that a thicker corpus callosum is required for right-brain hemisphericity. Is it that a thicker corpus callosum ensures the right mechanism is in place (more communication between the hemispheres) for the more feminine, talkative :-) , holistic right brain to become dominant? Or is it the other way round: right-brain dominance causes more interconnections between the hemispheres and leads to a thicker callosum?


Steps for the evolution and development of languages

There is an interesting post at Babel's Dawn highlighting the work of David Rose in relation to SFL (systemic functional linguistics).

As per Rose, certain prerequisites are needed for the evolution and development of languages as we know them.

Four conditions are suggested for developing explanatory models that may account for these linguistic phenomena. These include (a) a mechanism for reproducing complex cultural behaviors intergenerationally over extended time, (b) a sequence by which articulated wordings could evolve from nonlinguistic primate communication, (c) extension of the functions of wording from enacting interpersonal interactions to representing speakers’ experience, and (d) the emergence of complex patterns of discourse for delicately negotiating social relations, and for construing experience in genres such as narrative. These conditions are explored, and some possible steps in language evolution are suggested, that may be correlated with both linguistic research and archaeological models of cultural phases in human evolution.

Edmund Bolles summarizes these as follows:

Rose’s four steps required for the growth and survival of language are:

  1. reproducibility: along with the “suite of biological adaptations” for speaking, there has to be some “mechanism” for precisely reproducing the language that happens to be spoken wherever one happens to be born. Many inquiries into language acquisition assumed this reproducibility is purely biological, but Rose insists that language is reproduced across generations “by cultural means.” In other words, children learn language from their elders. We will see on this blog that this explanation is not accepted quite as widely as a novice might think. One thing is clear, we got this skill after we said goodbye to the chimpanzee’s line of descent.
  2. exchangeability: Once speakers have the ability to reproduce words they can “exchange” them. Rose takes the idea of an exchange of words more literally than I do; thus he talks about “exchange behavior” in primates, but the basic idea of being able to take and modify one another’s existing words to create new ones appears sound enough. The interesting thing about such interactions is that both parties in the exchange “get” it. The usage is understood as a bit of wit or cleverness rather than as an error, so wit too is something added to our species when we had parted from the surviving primates.
  3. extendibility: one very peculiar quality of humans is what a resourceful species we are, able to turn established tools to new tasks as the purpose demands. A digging tool becomes a backscratcher becomes a probe. Equally, we can extend the uses of our verbal tools. Thus, words which were surely first “exchanged” as tools for interpersonal actions could be extended for use in expressing ideas and then extended again to be used in thinking through some complex set of ideas. At this point biology is left in the dust as the role of language is extended at a pace that far outdistances plodding natural selection.
  4. combinability: the various extensions of speech can be combined to produce still more verbal wonders, such as stories and polite behavior that lets people negotiate delicate situations without giving offense. At this point we can speak of craft, maybe even artistry. Speech, thought, and culture has moved so far from its primate roots that the idea of common descent becomes surprising.
To me these bring to mind the more genetic and physical (as opposed to culturally based, as Rose presumes them to be) prerequisites for language in particular, and symbolic manipulation in general, that Premack outlined recently. I had commented on these earlier by integrating them with the existing stage-based developmental model of language evolution/development.

I'll briefly recap the pre-requisites that Premack had identified:

  • Voluntary Control of Motor Behavior. Premack argues that because both vocalization and facial expression are largely involuntary in the chimpanzee, they are incapable of developing a symbol system like speech or sign language.
  • Imitation. Because chimpanzees can only imitate an actor's actions on an object, but not those actions in the absence of the object that was acted upon, Premack suggests that language cannot evolve in them.
  • Teaching. Premack claims that teaching behaviors are strictly human, defining teaching as "reverse imitation" - in which a model actor observes and corrects an imitator.
  • Theory of Mind. Chimps can ascribe goals to others' actions, but Premack suggests these attributions are limited in recursion (i.e., no "I think you thought he would have thought that.") Premack states that because recursion is a necessary component of human language, and because all other animals lack recursion, they cannot possibly evolve human language.
  • Grammar. Not only do chimps use nonrecursive grammars, they also use only words that are grounded in sensory experience - according to Premack, all attempts have failed to train chimps to use words with meanings grounded in metaphor rather than sensory experience.
  • Intelligence. Here Premack suggests that the uniquely human characteristics of language are supported by human intelligence. Our capacity to flexibly recombine pieces of sensory experience supports language, while the relative lack of such flexibility in other animals precludes them from using human-language like symbol systems.
To me, Imitation and Teaching seem to be the cognitive mechanisms by which the reproducibility of languages across cultures and generations is ensured.

Theory of mind abilities would definitely be utilized in, and instrumental to, the process of exchangeability, whereby one can use tokens like words to exchange meanings. For this mechanism to evolve, an ability to understand that others have mental states similar to our own is necessary; only then can one comprehend what a person means when he uses a particular token. Also, the mirror system, which might underlie the ToM module, may be sufficient to explain the evolution of linguistic words from non-linguistic communication.

Grammatical abilities like recursion and the ability to use metaphor map directly onto capabilities like combinability and extendibility, whereby complex linguistic devices can be combined to produce complex discourse, and novel metaphors can be used to extend the semantics associated with a word.


I'm quite intrigued and excited by such commonalities! Does this excite you too? Let me know via comments.


Monday, October 16, 2006

Belief about Intelligence : how it affects performance and how it is formed

Affective Teaching keeps posting interesting basic cognitive tutorials, and their latest one deals with the different conceptions people have of intelligence and how these affect performance and attitudes.

As per that tutorial, people can have either a fixed (entity), trait-like view of intelligence and abilities, or a changeable (incremental), skill-like view. Interestingly, those with the fixed view are more prone to learned helplessness, an external locus of control, less persistence, and a failure to use learning strategies. On the other hand, those with the changeable view of intelligence are more persistent, adopt a mastery goal or orientation, are apt to use learning strategies, and credit success to effort and strategy.

This same difference in attitudes and outcomes was predicted by my recent blog post, where I analyzed the differential effects of providing generic (person-based) versus specific (outcome-based) feedback and praise. It was surmised that this would lead to differing views of intelligence and abilities as either trait-like or skill-like in nature. It is heartening to note that existing research supports such a differentiation in how individuals conceptualize intelligence and also accurately predicts the different outcomes that follow from the different underlying conceptualizations.

It should thus be clear that providing the right sort of feedback to a child is very important, so that they latch on to the right conceptualization of intelligence early on. This may also go a long way toward settling the expertise debate: geniuses have a mastery orientation and an incremental view of intelligence, which differs from the trait-like view held by most people. Thus, it is not just that they are more talented or better learners (although they are both); they also have a different attitude, and a different underlying concept of intelligence and ability, which is very much a result of the environmental feedback they received in childhood and is instrumental in making them what they are.


Is low IQ the cause of income inequality and low life expectancy or is it the other way round?

As per this post from the BPS Research Digest, Kanazawa of LSE has made the controversial claim that economic inequality is not the cause of low life expectancy; rather, both low life expectancy and economic inequality are a result of the low IQ of poor people. The self-righteous reasoning is that people with low IQ are not able to adapt successfully to the stresses presented by modern civilization and hence perish. He believes he has the data on his side when he claims that IQ is eight times more strongly related to life expectancy than is socioeconomic status. What he forgets to mention (or deliberately ignores) is the growing evidence that IQ is highly dependent on the socioeconomic environment for its full flowering, and that a low IQ has two components: a low genetic IQ inherited from the parents, plus stunted growth of IQ/intelligence due to the impoverished environment that results from the parents' low socioeconomic status.

A series of studies that I have discussed earlier clearly indicate that in the absence of good socioeconomic conditions, IQ can be stunted by as much as 20 points. Also discussed there is the fact that modern civilization as a whole has largely achieved the state of socioeconomic prosperity sufficient for the full flowering of a child's inherent genetic IQ, and as such the increments in IQ seen as prosperity has grown over the years (the Flynn effect) have started to level off. This fact also explains Kanazawa's finding that in 'uncivilized' sub-Saharan countries IQ is not related to life expectancy, but socioeconomic status is. Although he puts his own spin on this data, a more parsimonious (and accurate) interpretation is that in the sub-Saharan countries even the well-off do not have the socioeconomic conditions necessary for the full flowering of IQ, and thus the IQ of both well-off and poor parents there is stunted equally. The well-off (who are not really that well-off compared to their counterparts in Western countries) therefore gain no IQ advantage over the poor in these countries. The resulting effect on life expectancy is thus limited to the direct effect of economic inequality, and the IQ-mediated effect of economic inequality is not visible.

What Kanazawa deduces from the same data, and how he chooses to present these findings, just goes to show the self-righteous WASP attitude that many economists assume. After reading Freakonomics and discovering how its authors twist facts and present statistics in a biased manner to push their idiosyncratic theories and agendas, it hardly seems surprising that another economist has resorted to similar dishonest tactics: shocking people by supposedly providing hard data to prove conventional wisdom wrong. Tellingly, his own sub-Saharan data, showing that life expectancy there is highly dependent on socioeconomic conditions, strongly suggests that in societies where the effects of economic inequality are not mediated via IQ, economic inequality is the strongest predictor of low life expectancy.

Instead of just blaming people for their genes or stupidity, it would be better to address the causes of low IQ and, once those are tackled, to address the social inequality problem directly; for, by the author's own findings, when IQ is not to blame for low life expectancy, the blame falls squarely on economic inequality (as in the sub-Saharan data).


Friday, October 13, 2006

Tangled Bank #64


The 64th issue of Tangled Bank is now up at The Neurophilosopher.

This is the first time I have submitted to the carnival, and I would like to take this opportunity to welcome the regular readers of the Tangled Bank to this blog. Do take the readership poll on the sidebar and let me know in the comments what sort of content you would like to see more often.


Readership survey

I am experimenting with polls at present and would like to start with a simple one to find out the readership composition of this blog. So please take a moment to answer it. This will not only help me know my readers better but will hopefully result in content better suited to your needs, and enable more interaction via polls in the future.

You can access the poll on the sidebar.


generic vs specific feedback and the fundamental attribution error

A recent study indicates that giving children generic, trait-based feedback (of the form "you are a good drawer") increases feelings of helplessness after subsequent mistakes or failures and reduces their resilience in the face of failure, compared with giving them specific, outcome-based feedback (of the form "you drew a good drawing"). It is thus apparent that generic praise fosters a view of one's abilities as stable, inborn talents, while specific praise reinforces a concept of skill-based ability that may be affected by circumstances and can be worked on and acquired.

Generic praise implies there is a stable ability that underlies performance; subsequent mistakes reflect on this ability and can therefore be demoralizing. When criticized, children who had been told they were “good drawers” were more likely to denigrate their skill, feel sad, avoid the unsuccessful drawings and even drawing in general, and fail to generate strategies to repair their mistake. When asked what he would do after the teacher’s criticism, one child said, “Cry. I would do it for both of them. Yeah, for the wheels and the ears.” In contrast, children who were told they had done “a good job drawing” had less extreme emotional reactions and better strategies for correcting their mistakes.

It is interesting to read this alongside the fundamental attribution error, which was the theme of my Blogger SAT Challenge essay. As per this bias, people have an inherent tendency to view their own successes in terms of stable underlying talents or traits, and their failures as reflecting external circumstances. The reasoning reverses when applied to others: others fare well due to luck (or external circumstances) and fare badly due to dispositional factors.

From the above study, it is clear that although the fundamental attribution error may serve us well (after all, it must serve some purpose to have evolved), say by increasing our feelings of self-efficacy and thus leading to greater confidence and esteem, it has its downsides. It makes learning from our mistakes harder and leads to feelings of helplessness, or of an external locus of control, when we face failure. Rationalizing failures as due to our helplessness (despite a perceived stable talent or trait) and to external circumstances (rather than to carelessness or a lack of effort in this specific instance) also leads to less resilience in the face of failure and less motivation to engage in similar activities in the future.

It is thus apparent that positive feedback to children should be framed in specific, outcome-based terms, so that they do not fall prey to the fundamental attribution bias and instead place more emphasis on skill-based rather than talent-based accounts. Conversely, it seems plausible that when giving negative feedback it is best to be direct and point out any underlying issue the child may have, rather than glossing it over with environmental explanations; the child will make up environmental excuses for the failure anyway!

When encouraging a child to learn through observation, one should presumably describe others' successes as resulting from stable traits and skills, and explain their failures as due to circumstances beyond their control. This would go a long way toward helping the child overcome his inherent attribution bias and lead to a generally positive and compassionate view of others, and a resilient and humble view of himself.


Wednesday, October 11, 2006

Language and Cognition: a developmental framework revealed by color term analysis

There have been various claims about the ability of language to shape thought and perception, and one of the oft-cited phenomena supporting this Sapir-Whorf hypothesis is the evolution of color terms in languages, and how the lack of a color term in a language may influence the ability of that language's speakers to make categorical distinctions between colors or to perceive colors as differing.

The notion of basic color terms was originally proposed by Berlin and Kay (1969) in their seminal study 'Basic Color Terms: Their Universality and Evolution', in which they proposed that different languages (written or oral) have evolved to differing levels: a culture starts with only two color terms, equivalent to black and white or dark and light, before adding further colors roughly in the order red; green and yellow; blue; brown; and finally orange, pink, purple, and gray. Based on this, they grouped the ninety-eight languages studied into seven stages of an evolutionary sequence running from languages with words only for WHITE and BLACK to languages with words for the whole range of colors.

  1. STAGE I: WHITE, BLACK. Nine languages: 7 New Guinea, 1 Congo, 1 South India
  2. STAGE II: WHITE, BLACK, RED. Twenty-one languages: 2 Amerindian, 16 African, 1 Pacific, 1 Australian Aboriginal, 1 South India
  3. STAGE III
    1. STAGE IIIa: WHITE, BLACK, RED, GREEN. Eight languages: 6 African, 1 Philippine, 1 New Guinea
    2. STAGE IIIb: WHITE, BLACK, RED, YELLOW. Nine languages: 2 Australian Aboriginal, 1 Philippine, 3 Polynesian, 1 Greek (Homeric), 2 African
  4. STAGE IV: WHITE, BLACK, RED, GREEN, YELLOW. Eighteen languages: 12 Amerindian, 1 Sumatra, 4 African, 1 Eskimo
  5. STAGE V: WHITE, BLACK, RED, GREEN, YELLOW, BLUE. Eight languages: 5 African, 1 Chinese, 1 Philippine, 1 South India
  6. STAGE VI: WHITE, BLACK, RED, GREEN, YELLOW, BLUE, BROWN. Five languages: 2 African, 1 Sumatra, 1 South India, 1 Amerindian
  7. STAGE VII: COMPLETE ARRAY OF COLORS. Twenty languages: 1 Arabic, 2 Malayan, 6 European, 1 Chinese, 1 Indian, 2 African, 1 Hebrew, 1 Japanese, 1 Korean, 2 South East Asian, 1 Amerindian, 1 Philippine
This classification scheme has been revisited in light of recent research, most notably the World Color Survey; Kay and Maffi (1999), in 'Color Appearance and the Emergence and Evolution of Basic Color Lexicons', discuss the results and arrive at a five-stage developmental model of languages based on black, white, red, yellow, green, and blue terms only, leaving out of the analysis other basic terms like brown, orange, purple, and pink.

Their stages are essentially the same as those of Berlin and Kay, with stage IIIa (white, black, red, green) being more common than stage IIIb (white, black, red, yellow) among the stage III languages.

Cognitive Daily ran a recent commentary on the World Color Survey, and as per the analysis presented there, it appears that 41 of the languages covered were stage V languages and the remaining 69 were stage IV languages (in these languages no separate word for blue is present, so blue and green are treated as the same color and labeled with a single term, 'grue'). The finding that, across cultures, people who have a term for a particular color in their language agree on the actual hue that the term corresponds to is a strong argument in favor of the universality of color categories: the blue of one language is the same as the blue of another, and this is most probably due to the underlying physiology. See my blog posts related to color perception in humans in this regard.

Conversely, the fact that speakers of languages with no term for blue (only a common term, 'grue', for blue and green) also found it difficult to distinguish between blue and green hues suggests that having a term for a color does influence the way we categorize colors, and possibly also the way we perceive them. The latter (influence on perception) may be the more controversial claim, but that color terms affect cognition (categorization) is relatively uncontroversial.

It is instructive to pause here and note some facts from color vision physiology. The rods give us the ability to see even in the dark and may have been the first to evolve, giving us the concepts of black and white. The cones may have evolved later to give a sense of color. The opponent process involving the red and green cones gives rise to the perception of red and green. It is plausible that the red cones evolved first (in evolutionary time), giving a red signal and thus a red quale and a red color term; the green cones came later to give a green signal and a green quale/color term; and the R-G opponent process was born later still, refining the perception of red and green. It is also plausible that the brain then began combining the red and green signals (R+G) to perceive yellow. Thus a perception of red, green, and yellow could be generated by the brain based on the red and green cones alone. The R+G=Y signal does exist in the brain and is one of the signals involved in the blue-yellow opponent process. The blue cones apparently came last; using the signal from the blue cone and the Y=R+G signal, the blue-yellow opponent process enabled the perception of a blue quale, and a corresponding color term for blue, as well.

Further, it is instructive to note that brown (the stage V to stage VI transition of languages based on color terms) is perceived by a more complex process involving signals from both the R-G and B-Y opponent channels (specifically, a mixture of red and yellow at a point in space, giving orange) compared and contrasted with the intensity (the black-white achromatic signal) of the surrounding region. This leap, from opponent processes to a perception based on contrast with surrounding areas, marks a significant change in the perceptual mechanism employed (as is common of such stage transitions in developmental models); correspondingly, terms for brown are rarer, harder to claim as universal, and presumably evolved later. The perceptual processes of stages VII and beyond may determine how we perceive purple, pink, orange, and gray, but a more physiological analysis of the mechanisms involved will have to wait for another day, and a better-informed vision researcher. Here it suffices to note that there are sound physiological reasons why color terms may have evolved the way they did over historical and evolutionary time scales, and why some modern languages may still lack terms for colors whose discrimination evolved recently and, given the different perceptual processes involved, may not be the same in all cultures.
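
To make the channel arithmetic above concrete, here is a minimal Python sketch of this opponent-channel story, using the standard L/M/S labels for the red, green, and blue cones. The channel formulas, weights, and thresholds are my own illustrative assumptions (a cartoon, not actual retinal physiology), but they show how black/white, red/green, and blue/yellow terms could fall out of successively richer opponent signals:

```python
# Cartoon of the opponent-channel story above; formulas and thresholds are assumptions.

def opponent_channels(L, M, S):
    """Crude opponent signals from cone responses, each in [0, 1]."""
    luminance = (L + M + S) / 3.0      # achromatic (black-white) signal
    red_green = L - M                  # R-G opponent channel
    yellow = (L + M) / 2.0             # the 'Y = R + G' signal mentioned in the text
    blue_yellow = S - yellow           # B-Y opponent channel
    return luminance, red_green, blue_yellow

def crude_color_term(L, M, S, chroma_threshold=0.15):
    """Map cone responses onto one of the early basic color terms."""
    lum, rg, by = opponent_channels(L, M, S)
    if abs(rg) < chroma_threshold and abs(by) < chroma_threshold:
        return "white" if lum > 0.5 else "black"    # achromatic-only stage
    if abs(rg) >= abs(by):
        return "red" if rg > 0 else "green"         # R-G channel dominates
    return "blue" if by > 0 else "yellow"           # B-Y channel dominates

if __name__ == "__main__":
    samples = {                        # (L, M, S) triples for a few illustrative stimuli
        "bright gray": (0.8, 0.8, 0.8),
        "reddish":     (0.9, 0.3, 0.2),
        "yellowish":   (0.8, 0.7, 0.1),
        "bluish":      (0.2, 0.3, 0.9),
    }
    for name, cones in samples.items():
        print(f"{name:12s} -> {crude_color_term(*cones)}")
```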

Before speculating further, it would serve us well to get acquainted with the latest consensus regarding color terms and what they tell us about language and cognition. Kay and Regier (2005), in their TICS article 'Language, thought and color: recent developments', aptly summarize the state-of-the-art view, an interactionist one in which both nature and nurture, universalism and relativism, have their place. As per them:

The language-and-thought debate in the color domain has been framed by two questions:
1. Is color naming across languages largely a matter of arbitrary linguistic convention?
2. Do cross-language differences in color naming cause corresponding differences in color cognition?

In the standard rhetoric of the debate, a ‘relativist’ argues that both answers are Yes, and a ‘universalist’ that both are No. However, a number of recent studies, when viewed in aggregate, undermine these traditional stances. These studies suggest instead that there are universal tendencies in color naming (i.e. No to question 1) but that naming differences across languages do cause differences in color cognition (i.e. Yes to question 2).



We have already seen how the concept of focal colors (as outlined by Kay) is valid and seems to constitute a universal cognitive basis for both color language and color memory. Further, we have seen some neurophysiological support for the emergence of the focal colors red, yellow, green, blue, and brown. Jameson and D'Andrade have argued that the universal focal colors are salience maxima in color space, and that universals of color naming flow from a process that partitions color space in a way that maximizes information. A recent study by Griffin (2006), 'The basic colour categories are optimal for classification', J R Soc Interface 3(6):71-85, seems to support this hypothesis and posits that the eleven basic color categories identified by Kay are optimal and useful in computer vision too. All this evidence is mutually compatible and suggests that the basic properties and number of color categories, consistent with an optimal partitioning of color space, have led to the emergence of the corresponding neurophysiological and perceptual apparatus in humans to detect these categories, and hence to the evolution of that many color terms, in order of the complexity of these mechanisms and the incremental advantage they provide in categorization.

On the relativist side, it is claimed that cognitive variables like privileged memory, similarity judgments, or paired-associate learning for focal colors are well predicted by the boundaries of each language's color categories: a form of categorical perception of color. Since these boundaries vary across languages, speakers of different languages apprehend color differently. Moreover, these linguistic differences seem to actually cause, rather than merely correlate with, the cognitive differences. The further argument is that color terms are arbitrary and determine the perception of colors absolutely. Roberson, Davidoff et al., in 'Color categories are not universal: New evidence from traditional and Western cultures', argue that the evidence supporting focal colors and the universal categorical perception supposedly arising from them (viz. privileged memory or paired-associate learning for the proposed universal colors) does not hold up once the effect of verbalization (the use of linguistic tokens) is taken into account. As per them (emphasis mine):

In native English speakers a series of experiments found that verbal interference selectively removed the defining features of Categorical Perception. Under verbal interference, there was no longer the greater accuracy normally observed for cross-category judgments compared to within-category judgments. It thus appears that while both visual and verbal codes may be employed in the recognition memory of colors, subjects only make use of verbal coding when demonstrating Categorical Perception (Roberson & Davidoff, 2000). In a brain-damaged patient suffering from a naming disorder, the loss of labels radically impaired his ability to categorize colors

Participants from a traditional hunter-gatherer culture, whose language contains five basic color terms (under the definition of Kay Berlin & Merrifield, 1991), showed no tendency towards a cognitive organization of color resembling that of English speakers. They did not find best examples of English color categories easier to learn or remember than poor examples and, in a further set of experiments, evidence of Categorical Perception was found in both languages, but only at their own linguistic category boundaries.

Although the authors draw extreme conclusions from their findings, Kay moderates the viewpoint and concludes (emphasis mine):

It has been widely assumed that language is the cause of color categorical perception. This is suggested since – as we have seen – named category boundaries vary across languages, and categorical perception varies with them. However, Franklin and Davies have found startling evidence of categorical perception at some of these same boundaries in pre-linguistic infants and toddlers of several languages. Thus, some categorical color distinctions apparently exist prior to language, and may then be reinforced, modulated, or eliminated by learning a particular language.


This finally brings us to the post at Developing Intelligence on labels as an accelerator of ontological development. Although Chris dismisses the strong form of the Sapir-Whorf hypothesis (especially in relation to color) at the outset, he presents a study that supports the reasonable conclusion that language can accelerate the process of sortal/kind discrimination, such that a skill normally demonstrated only by 12-month-olds was in this case demonstrated by 9-month-olds given the proper linguistic input. One is not arguing here that sortal/kind discrimination would be impossible in the absence of linguistic input, merely that it is facilitated by language and happens earlier in the developmental cycle when linguistic labels are available. And not having labels certainly leads to a different cognitive and perceptual experience in infants compared with infants who use labels and can make the sortal/kind discrimination.

From the above, it may be inferred that though universal focal colors and color categories do exist (based on underlying neurophysiology or the spectral properties of the humanly visible world), they may become available to consciousness at different stages of an infant's (or a culture's, or a language's) development, and having labels or color terms for the categories may facilitate an earlier maturation of the color categorization faculty. Depending on where a culture or language is on its developmental path, the lack of proper color terms may limit its speakers' ability to perceive colors as belonging to categories for which they have no label.

Interestingly, in the Davidoff study, a brain-damaged patient suffering from an inability to label things was impaired in categorizing colors.

Though the exact mechanism by which labels or color terms work remains elusive, with multiple competing hypotheses (viz. that labels facilitate sortal/kind distinctions by aiding a domain-general, non-linguistic process such as memory, or that labels increase the salience of perceptual feature differences between objects), it is clear that labels are instrumental and play a definite role in the ontological development of the child.

One may take the strong line and argue that in the absence of color terms or labels one would not be able to have the full cognitive experience of color categorization or sortal/kind discrimination; but even without subscribing to this extreme view, it seems plausible that the different developmental levels of languages, as identified by their color terms, correspond to different levels of cognitive experience that are more or less readily available in the corresponding cultures.

Thus, while language affects thought and vice versa, both may be constrained by the developmental stage a culture is at. The cognitive experience, and the cognitive developmental stage from which that experience results, would correspond to the developmental stage of that language, and vice versa. Some cultures, by not using a fully evolved or developed language, may therefore not be experiencing the full range of cognition and emotion that is humanly possible. Conversely, depending on the linguistic devices a culture utilizes, its cognitive experiences may differ from those of another culture that utilizes a different, incompatible set of linguistic devices.


Cognitive and Physical Fitness

An interesting study has found that high BMI (excess body weight) in middle-aged adults is linked to cognitive decline. Though the experts have been focusing on a physical causal relationship (mediated by the effects of a lack of physical exercise on blood vessels and insulin), another plausible hypothesis is that those whose personality attributes dispose them toward laziness and a lack of physical exertion may similarly be disinclined to use their cognitive capacities to the fullest and exhibit mental laziness too. As the evidence for 'use it or lose it' in relation to cognitive capacities mounts, a 'lazy', 'careless', or challenge-avoidant attitude may be the underlying factor reflected in both physical decline (obesity) and cognitive decline.

A brain fitness movement currently seems to be gaining momentum, and a new blog, SharpBrains, has expertise in precisely that niche. They are running a survey, so you can let the authors know what content you would like to see featured more on that site. Exercise your brain to the fullest, but don't neglect the good old physical regimen, as it may have a determining effect too.


Monday, October 09, 2006

Encephalon University carnival online now

The new edition of Encephalon is online now at Cognitive Daily. I was pleasantly surprised to discover that a first honorary doctorate (a professorship!) has been granted to me by this esteemed university. Head over to the university carnival to read gems from my fellow emeritus professors at Encephalon University.


Thursday, October 05, 2006

Attention/Memory/Learning: double dissociation between ACC and PFC

I recently came across two studies, both of which point toward a double dissociation between the ACC and PFC: in the realm of working memory and attentional processes in one case, and of learning mechanisms (the acquisition and performance of a cognitive skill) in the other.

In the first study, by Kane and Engle, a Stroop interference task was used to identify the different attentional factors that determine successful execution of the task. Using some clever experiments, it was demonstrated that two selective attention mechanisms are involved: one related to goal maintenance and active before stimulus presentation, and the other active after stimulus presentation and related to the inhibition of inappropriate bottom-up responses (in the incongruent condition, the automatic response based on the color word itself rather than on the ink color demanded by the task) and the selection of the relevant response from among the competing ones.

As per the abstract of the study:

Individual differences in working-memory (WM) capacity predicted performance on the Stroop task in 5 experiments, indicating the importance of executive control and goal maintenance to selective attention. When the Stroop task encouraged goal neglect by including large numbers of congruent trials (RED presented in red), low WM individuals committed more errors than did high WM individuals on the rare incongruent trials (BLUE in red) that required maintaining access to the "ignore-the-word" goal for accurate responding. In contrast, in tasks with no or few congruent trials, or in high-congruency tasks that followed low-congruency tasks, WM predicted response-time interference. WM was related to latency, not accuracy, in contexts that reinforced the task goal and so minimized the difficulty of actively maintaining it. The data and a literature review suggest that Stroop interference is jointly determined by 2 mechanisms, goal maintenance and competition resolution, and that the dominance of each depends on WM capacity, as well as the task set induced by current and previous contexts.

As per this line of reasoning, errors in the Stroop task are thought to result from a failure to actively maintain the goal in mind, and may thus be related to memory retrieval per se. Reaction-time slowing, on the other hand, is thought to result from a post-stimulus attentional process: a failure to quickly bias the competition towards the correct rather than the incorrect representation. This might also be viewed as an attentional control mechanism whereby attention is kept from being diverted to irrelevant stimuli that are inconsistent with the goal held in WM.
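
To make the division of labour concrete, here is a minimal toy simulation of that two-mechanism account (a sketch only: the failure probabilities, conflict costs and RT parameters are invented for illustration and are not taken from Kane and Engle's data):

```python
import random

def run_subject(p_goal_failure, conflict_cost_ms, n_trials=1000,
                base_rt_ms=600, noise_ms=50):
    """Simulate one subject; return the incongruent-trial error rate and
    RT interference (mean incongruent RT minus mean congruent RT)."""
    errors = 0
    rt_incongruent, rt_congruent = [], []
    for _ in range(n_trials):
        incongruent = random.random() < 0.5
        goal_lost = random.random() < p_goal_failure   # pre-stimulus mechanism
        rt = base_rt_ms + random.gauss(0, noise_ms)
        if incongruent:
            if goal_lost:
                errors += 1                # the word wins outright: an error, not slowing
            else:
                rt += conflict_cost_ms     # post-stimulus competition resolution
            rt_incongruent.append(rt)
        else:
            rt_congruent.append(rt)
    error_rate = errors / len(rt_incongruent)
    rt_interference = (sum(rt_incongruent) / len(rt_incongruent)
                       - sum(rt_congruent) / len(rt_congruent))
    return error_rate, rt_interference

random.seed(0)
# Hypothetical low- vs high-WM subjects: more goal failures for low WM,
# but an identical competition-resolution cost for both.
for label, p_fail in (("high WM", 0.02), ("low WM", 0.15)):
    err, interference = run_subject(p_fail, conflict_cost_ms=80)
    print(f"{label}: incongruent error rate = {err:.1%}, RT interference = {interference:.0f} ms")
```

On this toy reading, the low-WM subject produces many more incongruent-trial errors without any change in the post-stimulus conflict-resolution cost, which is exactly the kind of dissociation the abstract describes.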

Developing Intelligence presents some additional observations to bolster the argument:

  • Across all subjects, the amount of RT facilitation (i.e., how much faster congruent trials are than neutral trials) correlates with error interference (i.e., how much more accurate neutral trials are than incongruent trials), suggesting that goal-maintenance failure lies behind both of these phenomena. In contrast, there is no correlation between RT facilitation and RT interference, as would be expected if goal-maintenance failure gave rise to all of these measures, nor is there a correlation between error and latency interference. The implication is that errors (and the related RT facilitation) are due to one process, and response-time latency/interference to another process involved in attending to ambiguous (multiple-response-generating) stimuli (a toy illustration of this correlational pattern follows the list).
  • On high-congruency Stroop tasks, schizophrenics show increased errors on incongruent relative to congruent trials, and increased facilitation on congruent relative to neutral trials. The implication is that in schizophrenics only one of the attentional mechanisms is selectively dysfunctional: the one related to goal maintenance. Since schizophrenics presumably do not show abnormal patterns of reaction times (beyond the increased RT facilitation on congruent trials that follows from failing to maintain the 'ignore-the-word' goal), the second mechanism, involving selection among competing responses, appears intact.
  • ERP studies of Stroop tasks have identified a wave that may originate from the anterior cingulate (ACC) and appears to correspond to response selection and competition processes; in contrast, the activity of a different wave, up to 800 ms before stimulus presentation, predicts correct performance on the next stimulus (and appears to originate from polar or dorsolateral frontal cortex [dlPFC]). The implication is that dissociated brain regions are involved in priming the response (goal maintenance) and selecting the response (conflict resolution, i.e. inhibition of inappropriate responses).
  • Event-related fMRI shows a strong negative correlation between delay-period dlPFC activity and Stroop interference, whereas ACC activity is tied to the presentation of incongruent stimuli. The implication is that PFC activity is related to errors, and thus to the process of goal maintenance, while ACC activity is related to what happens under incongruence, that is, when competing responses are available, and is thus tied to the process of response selection (inhibition of the inappropriate response).
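
The correlational pattern in the first bullet can be illustrated with a quick toy computation (hypothetical subjects with invented parameters; only the qualitative pattern matters): if a single goal-maintenance parameter drives both RT facilitation and error interference, while an independent conflict-resolution parameter drives RT interference, then facilitation should correlate strongly with error interference and hardly at all with RT interference.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation, to keep the sketch self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
facilitation, error_interference, rt_interference = [], [], []
for _ in range(200):                                   # 200 hypothetical subjects
    p_goal_failure = random.uniform(0.0, 0.2)          # pre-stimulus mechanism
    conflict_cost = random.uniform(40, 120)            # post-stimulus mechanism (ms)
    # Expected per-subject summary measures, plus a little measurement noise:
    facilitation.append(p_goal_failure * 150 + random.gauss(0, 3))     # word-reading speedup on congruent trials
    error_interference.append(p_goal_failure + random.gauss(0, 0.01))  # errors on incongruent trials
    rt_interference.append(conflict_cost + random.gauss(0, 3))

print("facilitation vs error interference:", round(pearson(facilitation, error_interference), 2))
print("facilitation vs RT interference:   ", round(pearson(facilitation, rt_interference), 2))
```

Under these assumptions the first correlation should come out strongly positive and the second close to zero, mirroring the pattern reported above.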

A clinching observation that could seal the argument for two dissociated mechanisms would be a correlation between errors on incongruent trials under a 0%-congruence condition (where the effect of goal maintenance is effectively factored out because subjects are forced to keep the goal in mind on every trial), or, better still, a condition in which the goal (the rule that you must respond to the color and not the linguistic word) is displayed on screen while the stimuli are presented so that the goal is maintained constantly, and the response-time latencies/interference observed in the normal Stroop task. Such a correlation would show that there is indeed an attentional mechanism that is independent of goal maintenance and depends only on conflict resolution.
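
A sketch of how that test might be run on per-subject data (the numbers below are entirely hypothetical and only illustrate the analysis; statistics.correlation needs Python 3.10 or later):

```python
# Hypothetical per-subject data for the proposed test: incongruent-trial error
# rates when the goal is displayed on screen, and RT interference (ms) from a
# standard Stroop session. The numbers are invented purely to show the analysis.
from statistics import correlation   # Pearson's r, available from Python 3.10

errors_goal_displayed = [0.02, 0.05, 0.08, 0.03, 0.10, 0.06, 0.04, 0.09]
rt_interference_ms    = [45,   70,   110,  55,   130,  90,   60,   120]

r = correlation(errors_goal_displayed, rt_interference_ms)
print(f"r = {r:.2f}")
# A reliably positive r on real data would support a conflict-resolution
# mechanism that operates independently of goal maintenance.
```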

In the second study, by Fincham and Anderson, a learning paradigm was used in which sports names were associated with arithmetical rules (either implicitly learned or explicitly told), and on each trial the subjects were required to retrieve the relevant rule and apply it. There were four conditions, crossing a visible-rule versus rule-retrieval factor (supposed to measure the effect of the rule-retrieval process) with a forward versus reverse calculation factor (supposed to measure the effect of rule complexity, the forward/reverse application of the rule introducing an additional control step).

In the first experiment, with its four trial conditions, the authors found that recall (the rule-retrieval condition) had a significant effect on both latencies and errors; they also found (but glossed over) a minor effect of direction (the complexity of rule application) on errors and latency, and no recall-by-direction interaction. It is thus evident that recall (rule retrieval) and direction (rule complexity/manipulation) are two different factors affecting performance. The imaging results, however, were not so helpful. Instead of finding a selective ACC activation effect linked to direction (as per their proposal of the ACC as an attentional control region) and a selective PFC activation effect linked to recall (as per their proposal of the PFC as a region involved in retrieval), they found that both recall and direction affected both ACC and PFC activations.

Their second experiment was done with the purpose of dissociating the recall (retrieval) and direction (control) components. However, they confounded the study by simultaneously introducing two variables: an additional direction manipulation, supposedly requiring an extra control step and not affecting retrieval at all, and a practice variable, supposedly affecting only recall (retrieval) and not control (rule manipulation). Neither assumption can be taken for granted. All three trial types in this experiment were recall trials: they present results for an initial trial, a forward-direction trial after some practice, and a reverse-direction trial after some practice. In my opinion, they should also have included a simple direction-neutral trial after some practice. Comparing this with the initial trial (which would be the same in all respects except for practice) would have allowed a conclusive association of practice with ease of retrieval and with the decrease in PFC activation.

Even if the two practice trials (reverse and forward combined) are taken as a substitute for that direction-neutral practice trial (which they unfortunately did not run), one can still only conclude from this study that practice (or ease of retrieval) leads to a decrease in PFC activation. The increase in ACC activity that they observed between the initial trials and the final trials (involving the reverse/forward direction manipulation) is the same result they observed in experiment one (where forward and reverse manipulations, in both the recall and explicit conditions, led to more errors, longer latencies and more ACC activation). They prefer to explain this as implying that ACC activation was required because an additional control step was involved. A more parsimonious explanation (and one more in line with current views of ACC function) is that once the reverse/forward direction condition is added, the stimulus that is presented (which also contains the cue as to the direction in which the calculation must be done) triggers a Stroop-like default, automatic forward application of the rule, and ACC activity is required to choose between the competing responses (if the reverse-direction cue is present, the forward-direction response must be inhibited). This would predict more RT and errors in the reverse condition (the incongruent trials) than in the forward condition (the congruent trials). One could even add control conditions in which novel sports words (paired with a novel explicit rule with no directionality associated with it) are displayed on some trials, and reaction times and errors measured on those. If the results come out the same as in the Stroop task, perhaps the same mechanisms are at work.
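
To make the Stroop-like reading concrete, here is a toy conflict index in the spirit of conflict-monitoring models, where conflict is the co-activation of incompatible response units; the activation values are invented and only the qualitative pattern matters:

```python
# Toy conflict index (invented activations): conflict ~ co-activation of the
# two incompatible direction responses.
PREPOTENT_FORWARD = 0.8   # habitual tendency to apply the rule in the forward direction
CUE_SUPPORT = 0.4         # bottom-up support for whichever direction the cue demands

def activations(cue):
    """(forward, reverse) response-unit activations for a given direction cue."""
    forward = PREPOTENT_FORWARD + (CUE_SUPPORT if cue == "forward" else 0.0)
    reverse = CUE_SUPPORT if cue == "reverse" else 0.0
    return forward, reverse

def conflict(cue):
    forward, reverse = activations(cue)
    return forward * reverse   # non-zero only when both responses are active at once

for cue in ("forward", "reverse"):
    print(cue, "trial conflict index =", round(conflict(cue), 2))
# forward: cue and habit agree, so the conflict index is 0.0
# reverse: the cued response competes with the prepotent forward response (0.32),
# predicting longer RTs, more errors and more ACC engagement on reverse trials
```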

The greater activation in the ACC could also, paradoxically, be due to practice itself. To rule this out, the initial trials containing both forward- and reverse-direction conditions should be compared with the later reverse- and forward-direction trials after practice. Only if no increase in ACC activity can be attributed to practice alone can the increase in ACC be attributed to the additional control step supposedly introduced in experiment 2. A possible scenario in which practice could influence ACC activation (and the post-stimulus response-selection mechanism) is one where practice or learning leads to greater salience of the activated goal, or a stronger top-down expectation, resulting in a stronger inhibitory signal against any stimulus that does not meet that expectation. It is not unreasonable to suppose that the strength of a rule (how deeply it has been ingrained in memory) directly determines the strength of the biasing that results from the top-down expectation of that rule's application. In that case, the ACC may, paradoxically, become more and more activated with practice (as the response expectation associated with the stimulus grows in habit strength through learning), in order to bias response selection ever more strongly in favour of the expected response (the goal).
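
In the same toy terms as above, if practice is modelled as an increase in the habit strength of the expected (forward) response, the conflict to be resolved on expectation-violating reverse trials grows with practice (again, the numbers are purely illustrative):

```python
# Illustrative extension of the toy conflict index: practice strengthens the
# habitual (expected) forward response, so the co-activation of incompatible
# responses on reverse trials, and hence the biasing work to be done, grows.
CUE_SUPPORT = 0.4          # assumed bottom-up support for the cued reverse response

def reverse_trial_conflict(habit_strength):
    forward, reverse = habit_strength, CUE_SUPPORT
    return forward * reverse        # co-activation of the two incompatible responses

for habit in (0.2, 0.5, 0.8, 1.1):  # habit strength growing with practice
    print(f"habit strength {habit:.1f} -> reverse-trial conflict = {reverse_trial_conflict(habit):.2f}")
# A more ingrained rule means a stronger prepotent expectation, and so more
# conflict to resolve (on this reading, more ACC activity) when the reverse cue violates it.
```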


In summary, there seem to be good grounds for believing that two attentional processes are involved in working memory, learning and performance: a PFC-based process of pre-stimulus goal maintenance, and an ACC-based process of post-stimulus response selection/biasing.

Sphere: Related Content

Wednesday, October 04, 2006

psychology, lies and videotapes

OK, I have been tagged, so I have to make up some nice-sounding traits about myself that will endear me more to my readers :-)


New rules of this tag:
1. Name the person who tagged you.
2. Mention 9 things about you.
3. Tag 6 people.

I. I was tagged by Archana Bahuguna, an old time dear friend.

II. The nine things about me, in no particular order or importance, are:

  1. Despite my scientific inclinations and a healthy skepticism, I have explored, and still keep dabbling in, esoteric occult subjects like tarot and astrology. I confess to possessing a tarot deck, and I used to do light-hearted predictions for my friends while in college (mostly fooling them by telling them what they wanted to hear :-)). At one time I seriously believed in Nostradamus's predictions of an impending third world war and wanted to do my bit to prevent the catastrophe!! Now, hopefully, I have become more reasonable and rigorously scientific-minded, but I am still intrigued by the power of these occult sciences to hold their ground through the years, and thus the fascination still remains.
  2. I am one of the silent types and make a poor conversationalist. Over the years I have realized that the best way to hide your foolishness is by keeping mum. Thus, I have to be literally prodded to engage in everyday small talk. The upside of this is that my friends presume that whenever I do manage to find something to say in a conversation, it would be bound to be profound or meaningful!!
  3. I'm a financially naive/careless person. I have never managed (or attempted) to make money from money. I do earn well, but I rarely invest that money in money-generating instruments like shares or property. I rationalize this by assuring myself that my lack of financial savvy stems from my professed disregard for, and antipathy towards, the capitalist system (with its emphasis on capital's role over everything else) as the best possible system one could have. I also end up paying more tax than I legally need to by not using tax-incentive schemes, and rationalize this as doing my bit to help the underprivileged.
  4. I like to take calculated risks. I like to explore the latent abilities that I either fear to possess or reasonably hope to develop, and to optimally balance my tangential interests and activities with a core moolah-generating activity, so as to not end up with a feeling of missed opportunities or a wasted life/talent. Some might say that this is just a propensity towards listlessness and a misguided sense of heroism arising from starting life all over again, but that doesn't deter me from trying my hand at something new and failing once more!!
  5. I am the studious, non-athletic sort of person. I rarely work out and am too lazy/unmotivated to even go for a regular morning walk. Despite an acute realization of the tremendous ill-effect my lack of physical exertion may have on my physical well being, I somehow never manage to place the body over mind. All the free time is either spent in mental wanderings and pursuits, in passive entertainment or in playing with my eight month old kid - only the last providing some physical activity.
  6. I like to think of myself as a spiritual person (whatever that means). I concur with Voltaire that if god does not exist, he has to be created. I strongly believe in evolution, but also believe in a higher purpose to life than mere survival, reproduction or increasing inclusive fitness. I believe Morality evolves, Choice evolves and as humans we have evolved to a stage where we have to take responsibility for ourselves as well as others. In this sense, I agree most strongly with the existential school of thought whereby we are responsible for giving essence (or meaning) to our existence. Here too, I am mostly spiritual in the analytical sense and like to focus on right actions as opposed to other experiential forms of enhancing spirituality like meditation or mindfulness.
  7. I am fascinated by movies, literature, mythology, art, music and the myriad ways in which the memes/ archetypes originate, replicate and survive in popular culture and the collective unconscious. I prefer aesthetic over utilitarian concerns and believe in the make-believe power of fabricated reality to take care of many of the pressing utilitarian needs. To provide meaning to a person, in some cases, may be more important, than providing food.
  8. I believe in the power of the ordinary, rather than the spectacle of the extraordinary. A culture that needs heroes is a potentially sick culture. A culture that doesn't have room for those lagging behind, whether due to differential abilities or circumstances, is a sick culture. A kind word or gesture, caring in relationships, a sharing of resources (however limited) and a touching of someone else's life for the better, all everyday acts one can easily indulge in, are equally, if not more, important than, say, making a once-in-a-lifetime dramatic technological or scientific innovation that may be put to good use. One's goodness must be reflected in everyday acts, and a culture should be such that it values these everyday acts of heroism and goodness by ordinary people.
  9. I sometimes lie, mostly passively, by not volunteering adverse information about myself. I am not someone who values absolutely or is adamant about the absoluteness of Truth. I believe in creating a fabricated reality if that serves a good purpose. I prefer to lie as infrequently as possible, but as I am a creative writer and have often managed to create decent poetry or prose by generously mixing (autobiographical) fact with fiction, I don't mind putting a spin on presented information, or selectively presenting information that I want. (This does not apply to my scientific blogging - I do try to be objective and truthful there). So, take the above revelations about myself with a pinch of salt!!

Tagging 6 people is the most difficult part. I'm not sure how many of them are going to respond (as my blogosphere consists entirely of psychologists who do not generally blog about personal stuff), but let me try.

I tag the Neurophilosopher, Shelley at the Retrospectacle, Chris at Developing Intelligence, Jake at Pure Pedantry, The Neurocritic and Mary at The Thinking Meat.


Any other reader is welcome to get tagged too; do leave your URL in the comments so that the tagging can be traced!

Sphere: Related Content

Tuesday, October 03, 2006

The Blogger SAT challenge: Read my entry and tell me the score!


I recently took the blogger SAT challenge and, though I thought I had done fairly well, came to know that I scored the median score of 3 as judged by the experts. This is way below the perfect 6 I received in the Analytical Writing section of the GRE a couple of years back.

So, clearly, practice and preparation do matter a lot.

The blog also lets readers score the essays (and two readers so far have been kind enough to give me a 6), so read my entry and please leave some encouraging scores/comments!

Sphere: Related Content

Monday, October 02, 2006

Synapse vol 1, issue 8 now online

Another stimulating edition of the Synapse is available, this time hosted at Mind Hacks and containing entries from some newcomers to the Synaptic world. Do check it out and have a good time.

Sphere: Related Content