The latest edition of Encephalon is now up at the Mind Hacks blog. It is a very good collection of neuro articles, and there is a lot of good stuff to drool over.
I especially liked the Neurocritic article on the correlation between spontaneous activity in fMRI and slow-wave EEG signals: we know that it is an important phenomenon, but what all this spontaneous activity signifies is still unclear. I also liked the Pure Pedantry commentary on the finding that two subregions of the dlPFC are implicated in hypothesis generation. He raises important points regarding the three conditions a brain area should satisfy before we jump to the conclusion that the area is indeed responsible for a particular function.
There is plenty of other interesting stuff, including a Michael Posner interview, a report on selectively erasing memories in mice, and a controversial post on whether more gesture usage implies slower linguistic learning and capabilities in children; so head on over to Encephalon and get your kicks!
Wednesday, October 29, 2008
Friday, October 24, 2008
I normally do not like to trash articles or opinion pieces, but this article by Michael Shermer in Scientific American has to be dealt with, as it is masquerading as an authoritative debunking by one of the foremost skeptics in one of the most respected magazines. Yet it is low on science and facts and leans more toward opinion, bias and prejudice.
Shermer, judging from the article, seems generally antagonistic to stage theories, as he thinks they are mere narratives and not science. His method of discrediting stage theories is to lump them all together (from Freud's theories to Kohlberg's), and then, by picking on one of them (the stages-of-grief theory of Kubler-Ross), to try to discredit them all. This is a little surprising. While I too believe (and it is one of the prime themes of this blog) that most stage theories have something in common and follow a general pattern, I would be reluctant to club developmental stage theories, which involve stages while the child is growing, with stage theories like the stages of grief, in which no physical development is concurrent with the stage process; rather, the stages occur in adults who have faced a particular situation and are trying to cope with it. In the former case the children are definitely growing and their brains are maturing, and there is a very real substrate that could give rise to distinctive stages; in the latter case the stages may be tied not so much to the development of the neural tissue as to its plasticity. The question in the latter case would be: does the brain adapt to losses, like catastrophic news or the death of a loved one, by reorganizing a bit, and does the reorganizing happen in phases or stages? The two issues of childhood development and adult plasticity are related, but may be different too. With adult neurogenesis now becoming prominent, I won't be surprised if we find neural mechanisms for some of these adult stages too, like the stages of grief, but I would still keep the issues separate.
Second, assuming that Shermer is right, that at least the stage theory of grief as proposed by Kubler-Ross is incorrect, and also that it can be clubbed with the other stage theories, would it be proper to conclude that all stage theories are incorrect based on the fact that one of them is false? It would be as if someone proposed a modular architecture of mind, and different modules of mind were proposed accordingly, but one of the proposed modules did not stand the scrutiny of time (let's say a module for golf-playing was not found in the brain); does that mean that all theories holding that the brain is organized modularly for at least some functions are wrong, and that all the other modules are proved non-existent? Maybe the grief-stages theory is wrong, but how can one generalize from that to all developmental stage theories, many of which have been validated extensively (like Piaget's), and go on a general rant against all things 'stages'!
Next, let me address another fallacy that Shermer commits, the causal analogy fallacy: that if two things are analogous then one must be causing the other, when in fact no directional inference can be drawn from the analogy. He asserts that humans are pattern-seeking, story-telling primates who like to explain away their experiences with stories or narratives, especially as these provide a structure over unpredictable and chaotic happenings. Now, I am with Shermer up to this point, and this has been my thesis too; but then he takes a leap and says that this is the reason we come up with stage theories. Why 'stage' theories? Why not just theories? Any theory, in as much as it is an attempt to provide a framework for understanding and explication, is a potential narrative, and perhaps anyone who tries to come up with a theory is guilty of story-telling by extension. The leap he is making here is the assumption that story-telling is a 'stage' process and that a typical story follows a pattern, namely the unfolding of the plot in distinct stages.
Now, I agree with the leap Shermer is making here: a narrative is not just any continuous thread of yarn that the author spins; it normally involves discrete stages, and though I have not touched on this before, Christopher Booker's work delineating the basic story plots also deals with the five-stage unfolding of the plot in each of the different basic plots. So I am not contesting the fact that story-telling is basically a stage process, with distinct stages through which the protagonist passes or distinct stages of plot development; what I am contesting is the direction of causality. Is it because we have evidence of distinct stages in the lives of individuals, and, in general, evidence for the eight-fold or five-fold stages of development of various faculties, that our stories reflect distinct stages as they unfold and the monomyth has a distinct stage structure; or is it because our stories have structures in the form of stages that the theories we develop also have stages? I believe that some theorizing in terms of stages may indeed be driven by our desire to compartmentalize everything into the eight or so basic stages and environmental adaptive problems we have encountered repeatedly, which have become part of our mythical narrative structure; but, most parsimoniously, our mythical narrative structure is stage-bound because we have observed regularities in our development and lives that can only be explained by resorting to discrete stages rather than a concept of continuous incremental improvement/development/unfolding.
Before moving on, let me give a brief example of the power of stage theories and how they can be traced to neural mechanisms, jumping from the very macro phenomena I have been discussing to the very micro phenomenon of perception. Consider the visuomotor development of a child. Early in life there is a stage when oculomotor control is due mostly to subcortical regions like the superior colliculus, and the higher cortical regions are not much involved (they are not yet sufficiently developed/myelinated). The retina of the eye is such that the foveal region is underdeveloped; this combination means that infants are very good at orienting their eyes to moving targets in their peripheral vision, but poor at colour and form discrimination. Also, they can perform saccades first, the capability to make antisaccades develops next, and the capacity to make smooth-pursuit movements comes later. There are distinct stages of oculomotor control that a child moves through, and this would definitely affect its perception of the world (for example, one can recognize and discriminate based on form first and colour later, as the visual striate areas for these mature in that order). In short, there are strong anatomical, physiological and psychological substrates for most of the developmental stage theories.
Now let me address why Shermer, whom I normally admire, has taken this perverse position. It is because his Skeptic magazine recently published an article by Russell P. Friedman, executive director of the Grief Recovery Institute in Sherman Oaks, Calif. (www.grief-recovery.com), and John W. James, co-authors of The Grief Recovery Handbook (HarperCollins, 1998), which tried to debunk a JAMA article that found support for the five-stage grief theory. Now, that Skeptic article had received a well-deserved thrashing on some reputed blogs (see this World of Psychology post, which exposes many of the holes in Friedman and James's argument), so possibly, out of desperation, Shermer thought: why not settle the scores and expose all stage theories as pseudoscience? Unfortunately he fails miserably in defending his publication, and we have seen above why!
Now let us come to the meat of the controversy: the stages-of-grief theory of Kubler-Ross, for which the Yale group found evidence and which the Skeptics didn't like and found worth criticizing. I have read both the original JAMA paper and the Skeptic article and see some merit on both sides. In fact, I even agree to an extent with the stance that Friedman et al. have taken, especially their decoupling of the stages of grief from the stages of the dying person / stages of adjustment to catastrophic news. Some excerpts:
IN 1969 THE PSYCHIATRIST ELIZABETH KÜBLER-ROSS wrote one of the most influential books in the history of psychology, On Death and Dying. It exposed the heartless treatment of terminally-ill patients prevalent at the time. On the positive side, it altered the care and treatment of dying people. On the negative side, it postulated the now-infamous five stages of dying—Denial, Anger, Bargaining, Depression, and Acceptance (DABDA), so annealed in culture that most people can recite them by heart. The stages allegedly represent what a dying person might experience upon learning he or she had a terminal illness. “Might” is the operative word, because Kübler-Ross repeatedly stipulated that a dying person might not go through all five stages, nor would they necessarily go through them in sequence. It would be reasonable to ask: if these conditions are this arbitrary, can they truly be called stages?
Many people have contested the validity of the stages of dying, but here we are more concerned with the supposed stages of grief, which derived from the stages of dying.
During the 1970s, the DABDA model of stages of dying morphed into stages of grief, mostly because of their prominence in college-level sociology and psychology courses. The fact that Kübler-Ross’ theory of stages was specific to dying became obscured.
Prior to publication of her famous book, Kübler-Ross hypothesized the Five Stages of Receiving Catastrophic News, but in the text she renamed them the Five Stages of Dying or Five Stages of Death. That led to the later, improper shift to stages of grief. Had she stuck with the phrase catastrophic news, perhaps the mythology of stages wouldn’t have emerged and grievers wouldn’t be encouraged to try to fit their emotions into non-existent stages.
I wholeheartedly concur with the authors that it is not good to confuse the stages a dying person may go through on receiving the catastrophic news of a terminal illness with the grief stages that may follow once one has learned of a loss and is coping with it (the death of someone, the divorce of parents, etc.). In the first case the event of concern lies in the future, which would lead to different tactics than in the latter case, where the event has already occurred. Thus, as rightly pointed out by the authors, denial may make sense for dying people ('the diagnosis is incorrect; I am not going to die; I have no serious disease'), while denial may not make sense for the loss of a loved one by death, as the event has already happened, and only a very disturbed person, unable to cope, would deny the factuality of the event. But this is a lame point: in grief (equated with the loss of a loved one), the first stage can rightly be characterized as disbelief/dissociation/isolation, whereby one actively avoids all thoughts of the loved one's non-existence and comes up with feelings like 'I still cannot believe that my mother is no longer alive'. Similarly, my personal view is that while anger and an energetic searching for alternatives may be the second-stage response to a catastrophic prospective forecast, the second-stage response to catastrophic news (news of the loss of a loved one) would be characterized more by an energized yearning for the lost one and an anger toward the unavoidable circumstances, and the world in general, that led to the loss.
The third stage is particularly problematic. In dying people it makes perfect sense to negotiate and bargain, as the event has not yet happened ('I'll stop sinning; take away the cancer'); but, as rightly pointed out by the authors, bargaining doesn't make sense for events that have already happened. While many authoritative people have substituted yearning for the third stage in the case of grief, I would propose that we replace it with regret or guilt. I know this would be controversial, but the idea is a bargaining over past events, like 'God, why didn't you take my life instead of my young son's?'; it doesn't make sense, but it is a normal stage of grieving: looking for and desiring alternative bad outcomes ('I wish I were dead instead of him'). The other two stages, depression and acceptance, do not pose as many problems, so I'll leave them for now; suffice it to say that becoming depressed/disorganized and then recovering/becoming reorganized are normal stages that one would be expected to go through.
Let me now return to their criticism of Kubler-Ross. They first attack her on the grounds that her evidence was anecdotal and based on personal feelings; then, instead of correcting this gross error by themselves providing statistical and methodological research results, they present anecdotal evidence based on their having helped thousands of grieving persons.
Second, they claim that these stage-based theories cause much harm; but I am not able to understand why a stage-based theory must cause harm, and, for all their good intentions, I think they are seriously confused here. On the one hand they claim (for example, in the depression section) that stages lead to complacency:
It is normal for grievers to experience a lowered level of emotional and physical energy, which is neither clinical depression nor a stage. But when people believe depression is a stage that defines their sad feelings, they become trapped by the belief that after the passage of some time the stage will magically end. While waiting for the depression to lift, they take no actions that might help them.
and on the other hand they claim that labeling something causes over-reactivity and over-treatment:
When medical or psychological professionals hear grievers diagnose themselves as depressed, they often reflexively confirm that diagnosis and prescribe treatment with psychotropic drugs. The pharmaceutical companies which manufacture those drugs have a vested interest in sustaining the idea that grief-related depression is clinical, so their marketing supports the continuation of that belief. The question of drug treatment for grief was addressed in the National Comorbidity Survey published in the Archives of General Psychiatry (Vol. 64, April 2007). “Criteria For Depression Are Too Broad Researchers Say—Guidelines May Encompass Many Who Are Just Sad.” That headline trumpeted the survey’s results, which observed more than 8,000 subjects and revealed that as many as 25% of grieving people diagnosed as depressed and placed on antidepressant drugs, are not clinically depressed. The study indicated they would benefit far more from supportive therapies that could keep them from developing full-blown depression.
Now, I am not clear what the problem is: is it complacency, or too much concern and over-treatment? And this argument they keep repeating and hammering down: that stages do harm because they make people complacent that things will get better on their own and no treatment is needed. I don't think that is a valid assumption. We all know that many things, like language, develop on their own, but there are critical periods when interventions are necessary for proper language to develop; so too is the case with grieving people. They would eventually recover, but they do need the support of friends and family, and other interventions, despite this being 'just a phase'. I don't think saying that something will statistically go away within a certain time period eases the effects one is feeling right now. An analogy may help. It is statistically true that, on average, within six months a person will get over his most recent breakup and perhaps start flirting again; that doesn't subtract from the hopelessness and feelings of futility he feels in the days just following the breakup, and most friends and family do provide support even though they know that this phase will pass. The same is true for the stages of grief, and the authors' concerns are ill-founded.
The one concern of the authors I did feel sympathetic to, though, was the stage concept being overused in therapy, with feelings like guilt being inadvertently implanted in clients by their therapists.
Grieving parents who have had a troubled child commit suicide after years of therapy and drug and alcohol rehab, are often told, “You shouldn’t feel guilty, you did everything possible.” The problem is that they weren’t feeling guilty, they were probably feeling devastated and overwhelmed, among other feelings. Planting the word guilt on them, like planting any of the stage words, induces them to feel what others suggest. Tragically, those ideas keep them stuck and limit their access to more helpful ideas about dealing with their broken hearts.
Therapists have to be really careful here and not be guided by pre-existing notions of how the patient is feeling. They should listen to the client and, when in doubt, ask questions, not implicitly suggest and assume things. That indeed is a real danger.
Lastly, the criticism regarding stages/common traits versus individual differences and uniqueness has to be dealt with. The claim that everyone grieves uniquely is not a novel claim, nor do I find it lacking in evidence; it is tautological. But still, some common patterns can be elucidated and subsumed under stages. These are the 'normal' stages, with enough room for individual aberration. I think there has to be more tolerance and acceptance of the 'abnormal' in general: if someone moves directly to acceptance and never feels any denial, he too is abnormal, but of a kind we readily accept as a resilient person; the other, who gets stuck at denial, has to be shown greater care and hand-held through the remaining stages to come to acceptance.
In the end, I would like to briefly touch on the Yale study that reignited this controversy. Here is the summary of 'An Empirical Examination of the Stage Theory of Grief' by Paul K. Maciejewski, PhD; Baohui Zhang, MS; Susan D. Block, MD; and Holly G. Prigerson, PhD.
Context The stage theory of grief remains a widely accepted model of bereavement adjustment still taught in medical schools, espoused by physicians, and applied in diverse contexts. Nevertheless, the stage theory of grief has previously not been tested empirically.
Objective To examine the relative magnitudes and patterns of change over time postloss of 5 grief indicators for consistency with the stage theory of grief.
Design, Setting, and Participants Longitudinal cohort study (Yale Bereavement Study) of 233 bereaved individuals living in Connecticut, with data collected between January 2000 and January 2003.
Main Outcome Measures Five rater-administered items assessing disbelief, yearning, anger, depression, and acceptance of the death from 1 to 24 months postloss.
Results Counter to stage theory, disbelief was not the initial, dominant grief indicator. Acceptance was the most frequently endorsed item and yearning was the dominant negative grief indicator from 1 to 24 months postloss. In models that take into account the rise and fall of psychological responses, once rescaled, disbelief decreased from an initial high at 1 month postloss, yearning peaked at 4 months postloss, anger peaked at 5 months postloss, and depression peaked at 6 months postloss. Acceptance increased throughout the study observation period. The 5 grief indicators achieved their respective maximum values in the sequence (disbelief, yearning, anger, depression, and acceptance) predicted by the stage theory of grief.
Conclusions Identification of the normal stages of grief following a death from natural causes enhances understanding of how the average person cognitively and emotionally processes the loss of a family member. Given that the negative grief indicators all peak within approximately 6 months postloss, those who score high on these indicators beyond 6 months postloss might benefit from further evaluation.
I believe they have been very honest with their data and analysis. They found peaks of disbelief, yearning, anger, depression and acceptance, in that order. I believe they could have clubbed anger and yearning together as the second stage, since this study dealt with stages of grief and not stages of dying, and should have introduced a new measure of regret/guilt; I predict that this new factor's peak would fall between the anger/yearning peak and the depression peak.
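The peak-sequence prediction above is easy to make concrete. Below is a toy sketch in Python: the bell-shaped curves, their widths, and the 5.5-month regret/guilt peak are entirely my own assumptions (only the 1/4/5/6-month peaks come from the Maciejewski et al. paper), so this illustrates the idea of ordering indicators by their peak month, not a re-analysis of the Yale data.

```python
import math

# Hypothetical rescaled grief-indicator curves over months post-loss.
# Peak months for the first four real indicators follow Maciejewski et
# al. (2007); the regret/guilt peak at 5.5 months is my own conjecture.
PEAKS = {
    "disbelief": 1.0,
    "yearning": 4.0,
    "anger": 5.0,
    "regret/guilt": 5.5,  # hypothesized extra indicator, not in the study
    "depression": 6.0,
}

def indicator(t, peak, width=2.0):
    """Bell-shaped response, rescaled so every curve peaks at 1."""
    return math.exp(-((t - peak) ** 2) / (2 * width ** 2))

# Sample each curve from 1 to 24 months post-loss in 0.1-month steps
months = [m / 10 for m in range(10, 241)]
peak_month = {
    name: max(months, key=lambda t, p=p: indicator(t, p))
    for name, p in PEAKS.items()
}

# Sorting indicators by the month of their maximum recovers the
# predicted stage sequence
sequence = sorted(peak_month, key=peak_month.get)
print(sequence)
# → ['disbelief', 'yearning', 'anger', 'regret/guilt', 'depression']
```

Acceptance is omitted because in the study it rises monotonically over the observation window rather than peaking within it.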
Thus, to summarize, my own theory of grief and dying (in the eight basic adaptive problems framework) is as follows:
Stage theory of dying (same as Kubler-Ross):
- Denial: avoiding the predator; as the predator (death) cannot be avoided, it is denied!
- Anger/Searching: searching for resources; an energetic (and thus partly angry) effort to find a solution to this looming death; belief in pseudo-remedies, etc.
- Bargaining/Negotiating: forming alliances and friendships; making a pact with the devil, or with God: just spare me this time and I will do whatever you want in future.
- Depression: the parental investment/bearing kids analogy: is it worth living/bringing more people into this world?
- Acceptance: the helping-kin analogy: humanity is myself; even if I die, I live on through others.
Stage theory of grief (any loss, especially the loss of a loved one):
- Disbelief: avoiding the predator (loss). I can't believe the loss happened; let me not think about it.
- Anger/Yearning: an energetic search for resources (reasons). Why did it happen to me? Can memories and yearning substitute for the loved one?
- Bargaining/Regret/Guilt: forming alliances and friendships; could this catastrophe be exchanged for another? Could I have died instead of him?
- Depression: the parental investment/bearing kids analogy: is it worth living/bringing more people into this world?
- Acceptance: the helping-kin analogy: maybe other significant others can substitute for the lost one? Maybe I should be thankful that other significant persons are still there and only one loss has occurred.
Do let me know your thoughts on this issue. Being a researcher in the stages paradigm, I was obviously infuriated on seeing the Shermer article; others may have more balanced views. Do let me know via comments or email!
Paul K. Maciejewski, Baohui Zhang, Susan D. Block, Holly G. Prigerson (2007). An Empirical Examination of the Stage Theory of Grief. JAMA, 297(7), 716-723.
Wednesday, October 22, 2008
It has been my long-standing thesis that autism and schizophrenia are opposite poles on a continuum; the most recent evidence I would like to allude to is the minicolumnar structure of the cortex and the abnormalities associated with it in both disorders.
It has been postulated that the prefrontal cortices of schizophrenic patients have significant alterations in their interneuronal (neuropil) space. The present study re-examines this finding based on measurements of mean cell spacing within the cell minicolumn. The population studied consisted of 13 male schizophrenic patients (DSM-IV criteria) and 13 age-matched controls. Photomicrographs of Brodmann's areas 9, 4 (M1), 3b (S1), and 17 (V1) were analyzed with computerized image analysis to measure parameters of minicolumnar morphometry, i.e., columnarity index (CI), minicolumnar width (CW), dispersion of minicolumnar width (VCW), and mean interneuronal distance (MCS). The results indicate alterations in the mean cell spacing of schizophrenic patients according to both the lamina and cortical area examined. The lack of variation in the columnarity index argues in favor of a defect postdating the formation of the cell minicolumn.
The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.
D. Buxhoeveden (2000). Reduced interneuronal space in schizophrenia. Biological Psychiatry, 47(7), 681-682. DOI: 10.1016/S0006-3223(99)00275-9
M. Casanova, L. de Zeeuw, A. Switala, P. Kreczmanski, H. Korr, N. Ulfig, H. Heinsen, H. Steinbusch, C. Schmitz (2005). Mean cell spacing abnormalities in the neocortex of patients with schizophrenia. Psychiatry Research, 133(1), 1-12. DOI: 10.1016/j.psychres.2004.11.004
Manuel F. Casanova, Daniel Buxhoeveden, Juan Gomez (2003). Disruption in the Inhibitory Architecture of the Cell Minicolumn: Implications for Autism. The Neuroscientist, 9(6), 496-507. DOI: 10.1177/1073858403253552
Last week, on Blog Action Day, I re-posted one of my earlier posts questioning Kanazawa's assertion that IQ causes longevity (and, implicitly, that low IQ causes poverty and not the other way round) and that SES has no effect on longevity net of IQ. That has been thoroughly dealt with earlier, and I will not readdress the issue; suffice it to say that I believe (and think the evidence is on my side) that a low SES does not allow the full flowering of genetic intelligence potential and is thus a leading cause of low IQ among low-SES populations. This low IQ, being a result of low SES, also gets correlated with longevity, which again would be largely explained by the person's low SES. Since low SES leads to both lower longevity and lower IQ, a correlation between IQ and longevity would also be expected.
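The confounding argument can be made concrete with a toy simulation (Python; the coefficients, noise levels and sample size are arbitrary assumptions chosen only to exhibit the mechanism, not estimates of real effect sizes): if SES independently drives both IQ and longevity, the two end up correlated even with no direct causal link between them, and the correlation collapses once SES is regressed out.

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residuals(ys, xs):
    """Residuals of ys after regressing out xs (simple linear fit)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return [y - my - slope * (x - mx) for x, y in zip(xs, ys)]

n = 10_000
ses = [random.gauss(0, 1) for _ in range(n)]
# Both IQ and longevity depend on SES plus noise; crucially, neither
# variable appears in the equation for the other.
iq = [0.6 * s + random.gauss(0, 0.8) for s in ses]
longevity = [0.6 * s + random.gauss(0, 0.8) for s in ses]

# The raw correlation comes out close to the theoretical 0.36 even
# though neither variable causes the other; net of SES it is near 0.
raw = pearson(iq, longevity)
net = pearson(residuals(iq, ses), residuals(longevity, ses))
print(f"raw r = {raw:.2f}, net of SES r = {net:.2f}")
```

Of course, this toy only shows that a confounded path is sufficient to produce the observed correlation; it does not by itself rule out a direct IQ-to-longevity effect, which is exactly what the partial-correlation studies argue about.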
Evidence is reviewed pointing to a negative relationship between intelligence and religious belief in the United States and Europe. It is shown that intelligence measured as psychometric g is negatively related to religious belief. We also examine whether this negative relationship between intelligence and religious belief is present between nations. We find that in a sample of 137 countries the correlation between national IQ and disbelief in God is 0.60.
Richard Lynn, John Harvey, Helmuth Nyborg (2008). Average Intelligence Predicts Atheism Rates across 137 Nations. Intelligence.
J. A. Whitson, A. D. Galinsky (2008). Lacking Control Increases Illusory Pattern Perception. Science, 322(5898), 115-117. DOI: 10.1126/science.1159845
Tuesday, October 21, 2008
Those of you who are veterans of the cognitive blogosphere will remember the excellent AlphaPsy blog and how it suddenly stopped posting and sort of 'died'. The team, including Olivier and Hugo, have now come back in their second reincarnation: a newly launched cognition and culture website with a blog and a news section. I am excited and looking forward to reading some good stuff. Do sample the blog; I am sure you will be happy to add it to your blogrolls.
Monday, October 20, 2008
The Institute of Psychiatry, London, conducts the Maudsley debates on relevant psychiatric topics between distinguished psychiatrists and neuroscientists, and also publishes them as a podcast. The most recent debate addressed the issue of whether anti-depressants are any better than placebos in treating depression. There were knowledgeable arguments on both sides, and no matter what position you hold, hearing the debate will definitely enhance your knowledge of the issues involved.
I, for one, did not know that anti-depressants work by addressing automatic and unconscious attention/perception and memory biases. While I was aware that CBT works top-down, affecting cognitive biases and brain regions different from the areas affected by anti-depressants (which presumably work bottom-up, on neurotransmitter levels), the revelation that Goodwin's team had found that anti-depressants too work on biases, but unconscious ones, while CBT works on conscious ones, was new and enriching.
On the other hand, I agree with many of the methodological issues raised by the speakers who claimed that anti-depressants are no better than placebos: that the results lack 'clinical significance'; that, being psychoactive, the drugs are bound to have some effects, so the relief may be symptomatic, due to the 'drug' nature of anti-depressants, rather than specific and addressing the underlying disease; and that the scale measuring depression (the HRSD) may not reflect the DSM criteria and may not be the best measure of disease severity. I concur, but still think that the current generation of anti-depressants must be doing some good (over and above the good they bring by way of the placebo effect), especially since research has shown how they work (with a lag of a few weeks before showing effects, primarily by inducing neurogenesis and affecting discrete brain areas) and how they are indeed effective, at least in severely depressed people. Still, all this should be taken with a pinch of salt: we have continuously been replacing outdated models of depression (like serotonin deficiency) with more and more accurate models (like neurogenesis). In my view we need to persist in that direction, while also maintaining a healthy skepticism about what the drug companies might say, and market new drugs and models for. Fortunately there are a host of unbiased pharmacologists, neuroscientists and psychiatrists out there who are struggling to find the most accurate model and the most effective medication/treatment (like CBT) for the same, so we don't need to despair.
However, given that we should not blindly accept all drugs (and models) marketed by Big Pharma at face value, given the clear evidence that they have not been proved effective beyond doubt, that negative findings have not been reported diligently, and that side-effects are often glossed over, I would request that you not be overly seduced by the anti-depressant efficacy hype, but moderate it with other known efficacious means like exercise, CBT and yoga (all of which may be working by placebo effect themselves, but which definitely have fewer or no side-effects compared with anti-depressants). This of course doesn't mean that you give up your medicines, at least not without consulting your psychiatrist, but rather that you supplement them with other non-drug measures and reduce your reliance on the drugs, as they definitely have side-effects and may not be as efficacious as depicted in advertisements and the popular press.
Here is the summary of the talk from the IoP website:
Inspired by the recent media frenzy over Prof Irving Kirsch's research, which suggested that antidepressants are no better than placebo, this Maudsley debate had an extremely good turnout.
Professor Kirsch gave us a run through of his research, in which he claimed to have found that there was a statistically significant benefit in the use of SSRIs over placebo - but that the difference was smaller than the standard of 'clinical significance' set down by the UK's National Institute for Clinical Excellence (NICE) for all but the most depressed patients. His team also found that patients' response to placebo across all the trials was 'exceptionally large' - an indication of the complexity of the disorder. It was only the fact that the most severely depressed patients showed a much lower response to placebo that made the drug response clinically significant in this group of patients.
Against the motion, Professor Guy Goodwin argued that there were crucial flaws to the bounds that Kirsch had used to define clinical effectiveness. He pointed out that these criteria fail to contain an accurate description of depression, for example that they fail to mention persistent negative thoughts and other crucial symptoms that would be included in DSM IV.
For the motion, Dr Joanna Moncrieff alluded to the idea that there may be some sort of conspiracy of complacency and wishful thinking within the psychiatric profession as to the effectiveness of anti-depressants.
An impassioned speech against the motion was then given by Prof Lewis Wolpert. This was inspired by his own experiences of depression, which proved a powerful persuader as to the place that anti-depressants have in the treatment of severe depression.
Prior to the debate the audience were asked to vote which side of the argument they favoured. The leaning was overwhelmingly against the motion, perhaps not surprising in a room full of psychiatrists! After the speakers had made their points votes were recounted and a minority had changed their minds and had been swayed to support the motion. However those against the motion still had the majority.
The original article that sparked this debate is available online at PLOS Medicine, and I'm including the editor's summary below:
Everyone feels miserable occasionally. But for some people—those with depression—these sad feelings last for months or years and interfere with daily life. Depression is a serious medical illness caused by imbalances in the brain chemicals that regulate mood. It affects one in six people at some time during their life, making them feel hopeless, worthless, unmotivated, even suicidal. Doctors measure the severity of depression using the “Hamilton Rating Scale of Depression” (HRSD), a 17–21 item questionnaire. The answers to each question are given a score and a total score for the questionnaire of more than 18 indicates severe depression. Mild depression is often treated with psychotherapy or talk therapy (for example, cognitive–behavioral therapy helps people to change negative ways of thinking and behaving). For more severe depression, current treatment is usually a combination of psychotherapy and an antidepressant drug, which is hypothesized to normalize the brain chemicals that affect mood. Antidepressants include “tricyclics,” “monoamine oxidases,” and “selective serotonin reuptake inhibitors” (SSRIs). SSRIs are the newest antidepressants and include fluoxetine, venlafaxine, nefazodone, and paroxetine.
Why Was This Study Done?
Although the US Food and Drug Administration (FDA), the UK National Institute for Health and Clinical Excellence (NICE), and other licensing authorities have approved SSRIs for the treatment of depression, some doubts remain about their clinical efficacy. Before an antidepressant is approved for use in patients, it must undergo clinical trials that compare its ability to improve the HRSD scores of patients with that of a placebo, a dummy tablet that contains no drug. Each individual trial provides some information about the new drug's effectiveness but additional information can be gained by combining the results of all the trials in a “meta-analysis,” a statistical method for combining the results of many studies. A previously published meta-analysis of the published and unpublished trials on SSRIs submitted to the FDA during licensing has indicated that these drugs have only a marginal clinical benefit. On average, the SSRIs improved the HRSD score of patients by 1.8 points more than the placebo, whereas NICE has defined a significant clinical benefit for antidepressants as a drug–placebo difference in the improvement of the HRSD score of 3 points. However, average improvement scores may obscure beneficial effects between different groups of patient, so in the meta-analysis in this paper, the researchers investigated whether the baseline severity of depression affects antidepressant efficacy.
What Did the Researchers Do and Find?
The researchers obtained data on all the clinical trials submitted to the FDA for the licensing of fluoxetine, venlafaxine, nefazodone, and paroxetine. They then used meta-analytic techniques to investigate whether the initial severity of depression affected the HRSD improvement scores for the drug and placebo groups in these trials. They confirmed first that the overall effect of these new generation of antidepressants was below the recommended criteria for clinical significance. Then they showed that there was virtually no difference in the improvement scores for drug and placebo in patients with moderate depression and only a small and clinically insignificant difference among patients with very severe depression. The difference in improvement between the antidepressant and placebo reached clinical significance, however, in patients with initial HRSD scores of more than 28—that is, in the most severely depressed patients. Additional analyses indicated that the apparent clinical effectiveness of the antidepressants among these most severely depressed patients reflected a decreased responsiveness to placebo rather than an increased responsiveness to antidepressants.
What Do These Findings Mean?
These findings suggest that, compared with placebo, the new-generation antidepressants do not produce clinically significant improvements in depression in patients who initially have moderate or even very severe depression, but show significant effects only in the most severely depressed patients. The findings also show that the effect for these patients seems to be due to decreased responsiveness to placebo, rather than increased responsiveness to medication. Given these results, the researchers conclude that there is little reason to prescribe new-generation antidepressant medications to any but the most severely depressed patients unless alternative treatments have been ineffective. In addition, the finding that extremely depressed patients are less responsive to placebo than less severely depressed patients but have similar responses to antidepressants is a potentially important insight into how patients with depression respond to antidepressants and placebos that should be investigated further.
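To make the NICE criterion concrete, here is a minimal sketch (with invented improvement scores that merely mimic the qualitative pattern described above, not the paper's actual data) of how a drug-placebo gap is checked against the 3-point clinical-significance threshold:

```python
# Illustrative sketch of the NICE clinical-significance rule applied to
# drug-placebo HRSD improvement gaps at increasing baseline severity.
# All numbers below are hypothetical.

NICE_THRESHOLD = 3.0  # required drug-placebo gap in HRSD improvement

def clinically_significant(drug_improvement, placebo_improvement):
    """NICE criterion: a gap of at least 3 HRSD points."""
    return (drug_improvement - placebo_improvement) >= NICE_THRESHOLD

# Hypothetical scores mimicking the reported pattern: drug response is
# roughly flat, while placebo response falls off in the most severely
# depressed patients, so only there does the gap clear the threshold.
trials = [
    {"baseline": 24, "drug": 9.0, "placebo": 8.0},   # moderate/severe
    {"baseline": 26, "drug": 9.5, "placebo": 7.8},   # very severe
    {"baseline": 29, "drug": 10.0, "placebo": 6.5},  # most severe
]

for t in trials:
    gap = t["drug"] - t["placebo"]
    print(f"baseline HRSD {t['baseline']}: gap = {gap:.1f} -> "
          f"clinically significant: {clinically_significant(t['drug'], t['placebo'])}")
```

Note how, in this toy version, the threshold is crossed only because the placebo improvement shrinks at high severity, exactly the point the researchers emphasize.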
Irving Kirsch, Brett J. Deacon, Tania B. Huedo-Medina, Alan Scoboria, Thomas J. Moore, Blair T. Johnson (2008). Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration. PLoS Medicine, 5 (2) DOI: 10.1371/journal.pmed.0050045
Friday, October 17, 2008
I had blogged previously about the obesity-dopamine connection and how obese people have been shown to have fewer dopamine receptors in the brain. The studies so far were correlational, leaving open the possibility that overeating and the resultant obesity were the cause, decreasing the number of dopamine receptors just as in addicted individuals (who also have fewer dopamine receptors).
However, a new study by Stice and colleagues in the latest edition of Science suggests that a reduced number of dopamine receptors is a cause rather than a consequence of obesity. They did this by studying a genetic variation, the A1 allele of the TaqIA restriction fragment length polymorphism, that leads to a reduced number of dopamine receptors. They found that carrying this allele increased the risk for obesity, and using fMRI they were also able to show that this was mediated by reduced dopamine signaling in the striatum.
Here is the abstract of the study:
The dorsal striatum plays a role in consummatory food reward, and striatal dopamine receptors are reduced in obese individuals, relative to lean individuals, which suggests that the striatum and dopaminergic signaling in the striatum may contribute to the development of obesity. Thus, we tested whether striatal activation in response to food intake is related to current and future increases in body mass and whether these relations are moderated by the presence of the A1 allele of the TaqIA restriction fragment length polymorphism, which is associated with dopamine D2 receptor (DRD2) gene binding in the striatum and compromised striatal dopamine signaling. Cross-sectional and prospective data from two functional magnetic resonance imaging studies support these hypotheses, which implies that individuals may overeat to compensate for a hypofunctioning dorsal striatum, particularly those with genetic polymorphisms thought to attenuate dopamine signaling in this region.
Now this should not be news to readers of this blog, because it is exactly what I had proposed in my earlier blog posts: a low number of dopamine receptors leading to a lower dopamine rush, leading to overeating and obesity. It is heartening to see the same being confirmed, though we will need more evidence to fully settle the direction of causality.
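The moderation claim can be sketched as a toy interaction model; the coefficients below are invented for illustration and only mimic the qualitative pattern in the abstract (a blunted striatal response predicts weight gain, more steeply in A1 carriers):

```python
# A minimal sketch of a moderation (interaction) analysis of the kind
# the study describes: weight gain predicted from striatal response to
# food, moderated by TaqIA A1 allele status. Coefficients are made up.

def predicted_weight_gain(striatal_response, a1_carrier):
    """Toy linear model with a response-by-genotype interaction term."""
    b0, b_resp, b_a1, b_interact = 2.0, -1.5, 0.5, -1.0
    a1 = 1 if a1_carrier else 0
    return (b0 + b_resp * striatal_response + b_a1 * a1
            + b_interact * striatal_response * a1)

# A blunted (low) striatal response predicts more gain, and the slope is
# steeper for A1 carriers -- that slope difference IS the moderation.
for a1 in (False, True):
    low = predicted_weight_gain(0.2, a1)   # blunted striatal response
    high = predicted_weight_gain(1.0, a1)  # strong striatal response
    print(f"A1 carrier={a1}: gain at blunted response {low:.2f}, "
          f"at strong response {high:.2f}")
```

In the real study this slope difference would be estimated from fMRI and BMI data rather than assumed, but the logic of "effect of X depends on genotype G" is captured by the interaction term.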
E. Stice, S. Spoor, C. Bohon, D. M. Small (2008). Relation Between Obesity and Blunted Striatal Response to Food Is Moderated by TaqIA A1 Allele. Science, 322 (5900), 449-452 DOI: 10.1126/science.1161550
Thursday, October 16, 2008
Till now, most of the research on learning at the molecular level, or LTP/LTD, has focused on classical conditioning paradigms. To my knowledge, this is the first time someone has looked at whether, at the molecular level, classical conditioning, which works by associations between external stimuli, is encoded and implemented differently from operant learning, which depends on learning the reward contingencies of one's spontaneously generated behavior.
Bjorn Brembs and colleagues have shown that the normal learning pathway implicated in classical conditioning, which involves the rutabaga gene in the fruit fly and works via adenylyl cyclase (AC), is not involved in pure operant learning; rather, pure operant learning is mediated by Protein Kinase C (PKC) pathways. This is not only a path-breaking discovery, as it clearly demonstrates a double dissociation using genetically mutant flies; it is also a marvelous example of how a beautiful experimental setup was devised to separate out the classical conditioning effects from normal operant learning and produce a pure operant learning procedure. You can read more about the procedure on Bjorn Brembs's site, and he also maintains a very good blog, so check that out too.
Here is the abstract of the article and the full article is available at the Bjorn Brembs site.
Learning about relationships between stimuli (i.e., classical conditioning) and learning about consequences of one's own behavior (i.e., operant conditioning) constitute the major part of our predictive understanding of the world. Since these forms of learning were recognized as two separate types 80 years ago, a recurrent concern has been the issue of whether one biological process can account for both of them. Today, we know the anatomical structures required for successful learning in several different paradigms, e.g., operant and classical processes can be localized to different brain regions in rodents, and an identified neuron in Aplysia shows opposite biophysical changes after operant and classical training, respectively. We also know to some detail the molecular mechanisms underlying some forms of learning and memory consolidation. However, it is not known whether operant and classical learning can be distinguished at the molecular level. Therefore, we investigated whether genetic manipulations could differentiate between operant and classical learning in Drosophila. We found a double dissociation of protein kinase C and adenylyl cyclase on operant and classical learning. Moreover, the two learning systems interacted hierarchically such that classical predictors were learned preferentially over operant predictors.
Do take a look at the paper and the experimental setup, and let's hope that operant learning receives more focus from now on, leading to a paradigmatic shift in molecular neuroscience; operant conditioning results are, in my opinion, more applicable to humans than classical conditioning results.
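The double-dissociation logic itself can be sketched in a few lines; the outcomes below are illustrative stand-ins for the qualitative pattern the paper reports (AC/rutabaga disruption selectively impairs classical learning, PKC blockade selectively impairs operant learning):

```python
# A sketch of double-dissociation logic with illustrative outcomes
# (not actual data): each manipulation impairs one learning type while
# sparing the other, which is the strongest evidence that two separate
# mechanisms are at work.

# outcome[(manipulation, learning_type)] -> is learning intact?
outcome = {
    ("AC disrupted (rutabaga)", "classical"): False,
    ("AC disrupted (rutabaga)", "operant"): True,
    ("PKC blocked", "classical"): True,
    ("PKC blocked", "operant"): False,
}

def double_dissociation(outcome):
    """True if each manipulation selectively impairs a different task."""
    ac_selective = (not outcome[("AC disrupted (rutabaga)", "classical")]
                    and outcome[("AC disrupted (rutabaga)", "operant")])
    pkc_selective = (outcome[("PKC blocked", "classical")]
                     and not outcome[("PKC blocked", "operant")])
    return ac_selective and pkc_selective

print(double_dissociation(outcome))  # -> True
```

A single dissociation (one manipulation impairing one task) could still be explained by, say, task difficulty; it is the crossed 2x2 pattern above that rules such confounds out.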
B. Brembs, W. Plendl (2008). Double Dissociation of PKC and AC Manipulations on Operant and Classical Learning in Drosophila. Current Biology, 18 (15), 1168-1171 DOI: 10.1016/j.cub.2008.07.041
Wednesday, October 15, 2008
Well, today is blog action day 2008, and the topic for today is Poverty.
I am afraid I will be posting one of my old posts today: a post relating poverty and SES to IQ. I am also publishing some relevant comments, as the comment length generally exceeded the article length :-)
The post, comments and my responses to the comments are as follows. I would love to rekindle debate on SES/poverty and IQ and am looking for more discussions. Also please check out this earlier post on the similar topic of poverty and IQ:
Original Post: Is low IQ the cause of income inequality and low life expectancy or is it the other way round?
As per this post from the BPS Research Digest, Kanazawa of LSE has made the controversial claim that economic inequality is not the cause of low life expectancy, but that both low life expectancy and economic inequality are results of the low IQ of poor people. The self-righteous reasoning is that people with low IQ are not able to adapt successfully to the stresses presented by modern civilization and hence perish. He thinks he has data on his side when he claims that IQ is eight times more strongly related to life expectancy than is socioeconomic status. What he forgets to mention (or deliberately ignores) is the growing evidence that IQ depends heavily on the socioeconomic environment for its full flowering, and that a low IQ has two components: a low genetic IQ inherited from the parents, plus stunted growth of IQ/intelligence due to the impoverished environment that comes with the parents' low socioeconomic status.
A series of studies that I have discussed earlier clearly indicate that in the absence of good socioeconomic conditions, IQ can be stunted by as much as 20 points. Also discussed there is the fact that modern civilization as a whole has succeeded in achieving a state of socioeconomic prosperity sufficient for the full flowering of a child's inherent genetic IQ, and as such the increments in IQ as we progress in years and achieve more and more prosperity (the Flynn effect) have started to become less prominent. This fact also explains the Kanazawa finding that in 'uncivilized' sub-Saharan countries IQ is not related to life expectancy, but socio-economic status is. Although he puts his own spin on this data, a more parsimonious (and accurate) reason is that in the sub-Saharan countries even the well-off lack the socio-economic conditions necessary for the full flowering of IQ, and thus the IQ of both well-off and poor parents in these countries is stunted equally. Thus, the well-off (who are not really that well-off in comparison to their counterparts in western countries) are not in any more advantageous position (with respect to IQ) than the poor in these countries. The resultant life-expectancy effect is thus limited to that directly due to economic inequality, and the IQ-mediated effect of economic inequality is not visible.
What Kanazawa deduces from the same data, and how he chooses to present these findings, just goes to show the self-righteous WASP attitude that many economists assume. After reading Freakonomics, and discovering how the authors twist facts and present statistics in a biased manner to push their idiosyncratic theories and agendas, it hardly seems surprising that another economist has resorted to similar dishonest tactics: shocking people by supposedly providing hard data to prove how conventional wisdom is wrong. Surprisingly, his own highlighting of the sub-Saharan countries' data, which shows that life expectancy there is highly dependent on socio-economic conditions, strongly suggests that in cultures where the effects of economic inequality are not mediated via IQ, economic inequality is the strongest predictor of low life expectancy.
Instead of just blaming people for their genes or stupidity, it would be better to address the reasons that lead to low IQs, and once those are tackled, to directly address the social inequality problem; for by the author's own findings, when IQ is not to blame for low life expectancy, the blame falls squarely on economic inequality (as in the sub-Saharan countries' data).
First of all, I beg your pardon for my limited English.
I find your findings quite interesting. But there could be an issue which limits the reasoning: how is IQ measured? Or rather, what does it really measure? Does it really define how smart or clever a person is?
I think there must be a lot of criticism about this, so it is important to recognize the limits of this approach, given the limitations of IQ measurement. Of course, there could be a reference in your and Kanazawa's articles (I have not seen either of them).
All of this is because I have met quite smart children living in the poorest zones of my city (Bogotá, Colombia). I would say all of them seem to be quite smart, at least from my point of view. They are all really quick at understanding abstract problems and linking things, and I think they have a strong capability to analyze any situation. Still, if you measure their IQ using problems which require, for instance, applying Pythagoras' theorem, surely they will be in trouble. So I think education could better explain economic inequalities and, thus, low life expectancy.
I have never explored this issue, so I would be thankful if you could refer me to some relevant literature, or even tell me whether I am quite wrong or not.
Sandy G said...
I appreciate your thoughtful comments. It is true that intelligence consists of a number of factors (as many as 8-10 broad factors), and is also differentiated into crystallized (Gc) and fluid (Gf) intelligence; but for most analyses a general underlying common factor, Spearman's g, is taken as reflective of intelligence and measured by IQ scores.
In this sense, IQ/g does reflect how clever or smart a person is, but success/outcome in life is affected by other factors like motivation, effort, creativity etc.
I agree that many children in impoverished environments are quite smart, but you would be surprised to discover how providing an enriched environment to them, at their critical developmental periods,would have resulted in lasting intelligence gains. They are smart, but could have been smarter, if they had the right socioeconomic environment. On the other hand, an average child from well-to-do family would be able to maximally develop its inherent capabilities and thus stand a stronger chance than the poor smart child, whose capabilities haven't flowered fully.
Cultural bias in IQ measures have been found in the past, but the field has vastly improved now and these biases are fast disappearing leading to more accurate and valid cross-cultural comparisons.
The key to remember here is that poor socio-economic condition affects longevity via multiple pathways- one of them is direct by limiting access to good health care and nutrition, but there are also indirect effects mediated by , as you rightly pointed, education (poor people get less education and not vice versa) and also intelligence.
Garett Jones said...
Two words: East Asia.
If bad social and economic outcomes were the key driver of low IQ, then we'd expect East Asians to have had low IQ's back when they were poor--say, back in the 50's and 60's. Check out Table 4 of my paper (page 28) to see if that's the case...
Guess not. So, East Asians have been beating Caucasians on IQ tests (on average) for as far back as we have data. You can get more historical data along these lines from Lynn's (2006) book, Race Differences in Intelligence.
And one can go even further back if you look at brain size, which correlates about 0.4 with IQ. Asian brains have been well-known to be larger than Caucasian brains for as long as folks have been measuring both of them. Hard to fit that in with WASP-driven science...
So simple reverse causality surely plays some role, but it can't explain East Asia.....
Sandy G said...
Thanks for dropping by and commenting.
I guess we agree on more things, than we disagree on. For example, in section IID of your paper, you concur with my explanation of Flynn effect that it is most probably due to the increase in living conditions and due to environmental factors enabling the full flowering of potential. Environment can and does have a strong disruptive negative effect, though it only has a limited positive enabling effect (no amount of good environment can give you an intelligence that is disproportionate to what your genes endow on you; but even minor lack of right environmental inputs or toxins, can lead to dramatic stunted achievement of that potential intelligence).
Also, it is heartening to note that early on in your paper you take the position that it will not settle the genetic vs environmental debate on IQ, but will only provide evidence that national IQ is a good indicator of national productivity.
I have no issue with that, and agree that if one disregards the process by which stable adult IQs are achieved, then the stable adult IQ that has been achieved would be a very good predictor of productivity and economic status (in a free market environment where other conditions are not adversely affecting success). I have no qualms with the causal relation between a better IQ and better SES, in a fair world.
What I do strongly disagree with is the assumption that low IQ is solely dependent on genetic factors. Bad socio-economic factors are the key drivers of low IQ, especially in situations where socio-economic status is so low that it doesn't guarantee access to basic amenities of life like proper nutrition and health care.
It is interesting to note that poor SES would cause stunted growth of IQ, and, due to the causal relation between IQ and SES, would lead to less productivity and lower income, thus maintaining or even aggravating the low SES. This is the downward vicious cycle from which it is very hard to emerge. This type of economy and culture would definitely have a lower IQ than what could have been achieved under the right conditions. The sub-Saharan countries that Kanazawa used in his study match this pattern, and some of the African national IQs (as per the data appendix in your paper), viz. Kenya: 72, South Africa: 72, Ghana: 71, conform to it.
The opposite expectation, that an upward-spiraling economy should rapidly lead to high IQs, is not reasonable, as the circle is vicious only in the downward direction. Monumental leaps in SES would not lead to dramatic effects on IQ if the earlier SES levels were already sufficient to ensure that no negative effects of environment come into play. The Flynn effect is a tribute to the fact that large jumps in SES (above the base level) only lead to small incremental changes in IQ.
Another thing to keep in mind is that the suggested SES-to-low-IQ causal link applies only to the achievement of the stable adult IQ, and is instrumental during the critical childhood developmental periods. Although environmental toxins can adversely affect IQ during adulthood, and there is emerging evidence for plasticity and neurogenesis in adulthood, a simpler and reasonable model is one whereby adult IQ is stable and not much affected by SES changes (either up or down) once it has stabilized. Thus, even if some positive effects of rising SES were to be observed, they would show up only in children exposed to that SES, and not in the IQ of the rest of the adult population, which has already achieved a stable IQ.
Thus, I do not agree with your explanation of the East Asian example. To me the data set appears very limited (no IQ results before the 1950s; no data sets for the same country or population over time), and even if we assume that only after the 1980s did the SES of these countries rise above the minimal needed level, we still do not have data on the IQ of children born under these SES conditions, so we cannot proclaim that there is no rise in IQ.
Further, it is quite plausible that productivity depends on many factors other than IQ, some of which are directly related to SES independent of IQ. Given a base level of SES at which the East Asians had managed to develop their inherent genetic IQ to the fullest, the SES may still not have been good enough to convert that IQ advantage into productivity. For example, a household that has sufficient SES to provide good nutrition and health care, and thus ensure that its children achieve their full IQ potential, may still not have enough resources to send them to a good school (or any school for that matter), and may lack access to basic infrastructure, which handicaps the utilization of its intelligence. Thus, despite having the human capital, lack of the more prosaic monetary capital may prevent them from achieving their full productivity. IQ may therefore increase first to the maximal achievable level, and only then does SES increase dramatically.
It would be interesting to turn the East Asian example on its head and ask: if IQ is the definitive cause leading to SES, how do you explain the anomaly that, despite high IQs in the 1950s (or, for that matter, the larger Asian brain since time immemorial), the East Asian countries did not have the corresponding productivity levels or SES? You might counter by saying that the IQ -> SES causal link is mediated by factors like free markets and reforms, which ensure that proper economic conditions are in place, and that only when these ideal market conditions hold does IQ predict SES.
To that my simple counter-argument would be that the SES -> IQ causal link also works, but only when SES is below the base level, and that SES does not predict IQ absolutely. Given the same optimal SES, different countries and cultures (which have different genetic pools) will have different IQ levels based on their inherent genetic capabilities.
On this view, the IQ of East Asians can be explained either by the fact that they have already achieved the SES required for full flowering, or by supposing that they have yet to achieve their highest IQ levels, in which case their genetic potential is vastly superior and their IQs may rise further in future.
From anecdotal evidence I can tell you that the average Indian has far more intelligence and creative potential than the average IQ of 82 would suggest; most of the high-SES families that have achieved that high IQ migrate to the US/West and achieve high SES there.
What brings down the national average is the sad fact that a lot of Indians still live below the poverty line, in sub-optimal SES conditions that leave them with lower IQs than their genes or genetic makeup would suggest.
Looking forward to a fruitful discussion.
PS: Despite the tone of my original post, I have high regard for economists in general and people like Amartya Sen, Kahneman and Tversky in particular.
Interesting blog entry. Has the author of it actually read the paper he is criticizing? I noticed that it costs $15 online. If not, is the author of the blog certain that the statistical methods employed by Kanazawa do not take his complaints into account implicitly? One hopes that the author is not criticizing a peer-reviewed scientific paper without having read it.
Sandy G said...
It would be better if, after having read the paper (otherwise by your own high standards you wouldn't have defended an article without having read it first), you would be kind enough to tell the readers of this blog how Kanazawa has taken the effects of low SES-low IQ developmentally mediated effect in consideration in his study.
You are correct in guessing that I haven't read the article (I believe in free access, so I neither publish nor read material that is not freely available). I would welcome it if you or someone else could mail me the relevant portions or post them on this blog (under fair use).
As for invoking authority covertly by referring to peer review in a prestigious journal, I would like to disclose that I haven't taken a single course or class in psychology, either in school or college; so if authority is the determinant, you can stick to reading articles in scholarly journals by those who have doctoral degrees. Blogs are not for you. Otherwise, if you believe more in open discussion and logical argument, let's argue on facts and on weaknesses of study methods, and rely more on public review to catch any discrepancies.
What I could gather from the abstract was that "The macro-level analyses show that income inequality and economic development have no effect on life expectancy at birth, infant mortality and age-specific mortality net of average intelligence quotient (IQ) in 126 countries". I take this to mean that SES has no effect on longevity if the effects of IQ are factored out; the 'if' is very important. This is a very perverse position. It assumes that longevity is due to IQ, and that only after IQ-mediated differences in longevity have been factored out should we look for an effect of SES on longevity. That rests on an a priori assumption that longevity is primarily explained by IQ.
What prevents the other, more valid and real interpretation: that SES predicts longevity, and that there is little effect of IQ on longevity net of SES? Here the variation in longevity is explained by SES, and after taking that into account it would be found that IQ by itself, independent of its being a consequence of SES, has little effect on longevity. The same set of data supports this interpretation, because IQ and SES are related to a great degree and both are also related to longevity. It is just a matter of interpretation which is the primary cause and which the effect.
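To see why such 'net of' claims cut both ways when the predictors are highly correlated, here is a small sketch using the standard partial-correlation formula; the correlation values are invented for illustration and are not Kanazawa's data:

```python
# Partial correlations with two highly correlated predictors: when IQ
# and SES are strongly related, "X net of Y" shrinks for BOTH variables,
# so the data alone cannot say whose effect is the "real" one.
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """Correlation of x and y with z partialled out (first-order)."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical zero-order correlations:
r_iq_long = 0.50   # IQ vs longevity
r_ses_long = 0.48  # SES vs longevity
r_iq_ses = 0.85    # IQ vs SES (highly collinear)

# SES -> longevity, net of IQ: looks small...
print(round(partial_r(r_ses_long, r_iq_ses, r_iq_long), 3))
# ...but IQ -> longevity, net of SES, shrinks dramatically too.
print(round(partial_r(r_iq_long, r_iq_ses, r_ses_long), 3))
```

With the predictors correlated at 0.85, both partial correlations fall far below the zero-order correlations of about 0.5; which variable's effect "disappears" once the other is controlled is a modeling choice, not a finding.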
To take an absurd position, I could argue that longevity predicts/causes both SES and IQ, and reverse the causal link altogether. One can take the theoretical stand that if people live longer, we have more labor force, blah, blah, blah... so more productivity, so better SES; further, longevity means that there are more wise old folks in the society, and if IQ were mostly determined by social influences (I do not subscribe to this; I am just taking an absurd position to show the absurdity of Kanazawa's position), then the longevity of the population (more wise men) causes high IQs.
Also, please note that the above conclusion applies only to the macro data he has. That interpretation is independent of his micro-level data, which found that self-reported health was predicted more by IQ than by SES; the micro data has nothing to do with the interpretation of the macro data. Again, I don't know where he got the micro data, but I'm sure it would be a developed-world population sample.
I am somewhat familiar with the macro data on which he is basing such claims, and there I do not see any reason to prefer his interpretation over other more realistic interpretations.
In the future, let's discuss the merits of arguments, and not resort to ad hominem attacks over whether someone is qualified to make an argument or not. (In my opinion, even by reading an abstract, one can form a reasonable idea of the arguments and methodologies employed, and is thus eligible to comment.)
Friday, October 03, 2008
We present six experiments that tested whether lacking control increases illusory pattern perception, which we define as the identification of a coherent and meaningful interrelationship among a set of random or unrelated stimuli. Participants who lacked control were more likely to perceive a variety of illusory patterns, including seeing images in noise, forming illusory correlations in stock market information, perceiving conspiracies, and developing superstitions. Additionally, we demonstrated that increased pattern perception has a motivational basis by measuring the need for structure directly and showing that the causal link between lack of control and illusory pattern perception is reduced by affirming the self. Although these many disparate forms of pattern perception are typically discussed as separate phenomena, the current results suggest that there is a common motive underlying them.
Michael Tomasello has a new book out titled "The Origins of Human Communication", and it seems promising, though it has been a bit harshly reviewed at Babel's Dawn. In it Tomasello proposes that a prerequisite for language is 'a psychological infrastructure of shared intentionality'. The book is based on his Jean Nicod lectures, and you can read another review here too.
What I am most interested in is this intentionality business. I have commented on orders of intentionality previously, and this shared intentionality seems to fit the third order of intentionality that I proposed was necessary for communication.
But first, the premise of the book:
Tomasello opens his book with a consideration of the “infrastructure” that enables people to tell one another things. Apes do not have this infrastructure and the absence leads to scenes like this one:
A “whimpering chimpanzee child” is searching for its mother; the other chimps in the area are smart enough and social enough to recognize why the chimpanzee is whimpering; sometimes one of the chimps present will know where the mother is, and of course chimps have the physical ability to raise an arm and point out the mother; even so, chimpanzees never help forlorn infants by pointing to the mother.
There is a straightforward, Darwinian explanation for the ape’s mum’s-the-word behavior. Individuals don’t help non-kin. There is nothing in it for the informed adults to help the whimpering child of another. But Tomasello comes at the question from another perspective. Humans typically do help out whimpering children, even if the child is a stranger. An adult, happening upon a solitary, unknown, whimpering child is very likely to stop and ask what is wrong, take charge, and stick around until the problem is resolved. This activity strikes us as perfectly natural, normal behavior, even though it is contrary to so many of the rules in Darwin’s book. What, Tomasello wonders, is there about humans that makes such behavior easy and routine? His answer: “a psychological infrastructure of shared intentionality” [p. 12].
Thus, the premise is that pro-social behaviour and the shared intentionality underlying it are the pre-requisites for any meaningful language to evolve. And for this some tools are required.
The psychological tools Tomasello refers to are cognitive and emotional. The cognitive tools give us the understanding to engage in joint purposes and joint attention. The emotional tools provide us with the motivation for helping and sharing with others. These tools enable people to act together on a “common ground.”
Bolles goes on to speculate that this could be tied to autistics' difficulty with language, and I concur that the cognitive deficits related to intentionality, as opposed to affective deficits such as lack of empathy or mindblindness, may be at the root of autistics' language and communicative difficulties. We already know that they lack ToM to an extent and that they also have communicative and social difficulties; might the lack of shared intentionality, or of intentionality at all, or of the feeling that one is an intentional agent, lie at the heart of the autism issue?
Immediately one can imagine all sorts of peculiarities that would arise in people who lack some part of these needs. Some people might have the prosocial motivation but not the cognitive ability to form a bird’s eye view. Perhaps autistic-spectrum disorder includes this difficulty. Others might have the cognitive ability, but not the prosocial motivation. There’s your sociopath, in a nutshell.
I think this common ground and 'infrastructure of shared intentionality' concept is awesome and I intend to read the book and review it soon on this blog.
I had speculated in one of my earlier posts that glutamate, GABA, glycine and aspartate may be involved in classical conditioning/avoidance learning. To quote:
That is it for now; I hope to back up these claims, and extend this to the rest of the 3 traits too in the near future. Some things I am toying with are either classical conditioning and avoidance learning at these higher levels, or behavior remembering (as opposed to learning) at these higher levels. Also, other neurotransmitter systems like glutamate, glycine, GABA and aspartate may be active at the higher levels. Also, neuropeptides too are broadly classified in five groups, so they too may have some role here. Keep guessing and do contribute to the theory if you can!!
Now, I have discovered an article that links Glutamate to classical conditioning. It is titled Reward-Predictive Cues Enhance Excitatory Synaptic Strength onto Midbrain Dopamine Neurons, and here is the abstract:
Using sensory information for the prediction of future events is essential for survival. Midbrain dopamine neurons are activated by environmental cues that predict rewards, but the cellular mechanisms that underlie this phenomenon remain elusive. We used in vivo voltammetry and in vitro patch-clamp electrophysiology to show that both dopamine release to reward predictive cues and enhanced synaptic strength onto dopamine neurons develop over the course of cue-reward learning. Increased synaptic strength was not observed after stable behavioral responding. Thus, enhanced synaptic strength onto dopamine neurons may act to facilitate the transformation of neutral environmental stimuli to salient reward-predictive cues.
Though the article itself does not talk about glutamate, and nor does this Scicurious post on Neurotopia commenting on the same (which focuses more on the dopamine connection), I still believe that we have a glutamate connection here. First, let us see how the phenomenon under discussion is indeed nothing but classical conditioning:
The basic idea is that, when you get a reward unexpectedly, you get a big spike of DA to make your brain go "sweet!" After a while, you begin to recognize the cues behind the reward, and so seeing the wrapper to the candy will make your DA spike in anticipation. But it's only very recently that we've been able to see this change taking place, and there were still lots of questions as to what was happening when these changes happen.
So the authors of this study took a bunch of rats. They implanted fast scan cyclic voltammetry probes into their heads. Voltammetry is a technique that allows you to detect changes in DA levels in brain areas (in this case the nucleus accumbens, an area linked with reward) which represent groups of cells firing. So the rats had probes in their heads detecting their DA, and then they were given a stimulus light (a conditioned stimulus), a nosepoke device, and a sugar pellet. There is nothing that a rat likes more than a sugar pellet, and so there was a nice big spike in DA as it got its reward. So the rats figured out pretty quickly that, when the light came on, you stick your nose in the hole, and sugar was on the way. As they learned the conditioned stimulus, their DA spikes in response to reward SHIFTED, moving backward in time, so that they soon got a spike of DA when they saw the light, without a spike when they got the pellet. This means that the animals had learned to associate a conditioned stimulus with reward. Not only that, the DA spike was higher immediately after learning than the spike in rats who just got rewards without learning.
So, if we consider the dopamine spike as an Unconditioned Response, then what we have is a new CS-> CR pairing or classical conditioning taking place. Now, the crucial study that showed that the learning is mediated by Glutamate: (emphasis mine)
To find out whether or not excitatory synapses were in fact changing, the authors conducted electrophysiology experiments in rats that were either trained or not trained. Electrophysiology is a technique where you actually put a tiny, tiny electrode into a cell membrane. When that cell is then stimulated, you can actually WATCH it fire. It's really very cool to see. Of course all sorts of things are responsible for when a cell fires and how, but what they were looking at here were specific glutamate receptors known as AMPA and NMDA. These are two major receptors that receive glutamate currents, which are excitatory and induce cells downstream to fire. What they found was that, in animals that had been trained to a conditioned stimulus, AMPA and NMDA receptors had a much stronger influence on firing than in non-trained animals, which means that the synaptic strength on DA neurons is getting stronger as animals learn. Not only that, but cells from trained rats already exhibited long-term potentiation, a phenomenon associated with formation of things like learning and memory.
But of course, you have to make sure that glutamate is really the neurotransmitter responsible, and not just a symptom of something else changing. So they ran more rats on voltammetry and trained them, this time putting a glutamate antagonist into the brain. They found that the glutamate antagonist completely blocked not only the DA shift to a conditioned stimulus, but the learning itself.
From the above it is clear that glutamate, and the LTP it leads to at midbrain neurons' synapses, is crucial for classical-conditioning learning. It seems that one more puzzle is solved and another jigsaw piece fits where it should.
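The backward shift of the dopamine spike described above also has a well-known computational reading: it is what a temporal-difference (TD) prediction error does over the course of cue-reward learning (the classic Schultz/Montague-style interpretation). Below is a minimal toy sketch of that idea, not the authors' model; the timing, learning rate, and the assumption that pre-cue time steps are unpredictable are all illustrative choices of mine:

```python
import numpy as np

# Toy TD sketch of the dopamine shift: a cue at step 2 predicts a
# reward at step 8. All parameters here are illustrative only.
T = 12                       # time steps per trial
cue_t, rew_t = 2, 8
alpha, gamma = 0.2, 1.0      # learning rate, discount factor

V = np.zeros(T)              # learned value of each time step in the trial

def run_trial(V):
    """Run one conditioning trial; return the prediction-error trace."""
    delta = np.zeros(T)
    for t in range(1, T):
        r = 1.0 if t == rew_t else 0.0
        # TD error on entering step t: reward + new expectation - old one
        delta[t] = r + gamma * V[t] - V[t - 1]
        if t - 1 >= cue_t:   # pre-cue steps stay at 0: cue onset is unpredictable
            V[t - 1] += alpha * delta[t]
    return delta

first = run_trial(V)          # error trace on the very first trial
for _ in range(300):
    last = run_trial(V)       # error trace after extended training

print("naive peak at t =", int(np.argmax(first)))    # at reward delivery (8)
print("trained peak at t =", int(np.argmax(last)))   # has migrated to the cue (2)
```

Early on, the error (the model's stand-in for the DA burst) peaks when the sugar pellet arrives; after training, the reward is fully predicted, its error vanishes, and the burst sits at the light, just as in the voltammetry data. The glutamate-dependent synaptic strengthening the paper reports would be the biological substrate doing the value updates that this sketch performs arithmetically.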