Methodological clarity required when publishing social science in natural science journals

This post is co-authored with Warren Pearce and first appeared on the Making Science Public blog on the 23/10/2015. It concerns a Correspondence which appeared in Nature Climate Change. You can check out the original version (and the lively comments section!) here: https://blogs.nottingham.ac.uk/makingsciencepublic/2015/10/23/methodological-clarity-required-when-publishing-social-science-in-natural-science-journals/

The latest issue of Nature Climate Change features a Correspondence from Peter Jacobs and colleagues which concerns a recent Letter that appeared in the same journal; our Reply is also published. We do not wish to deny that there are real and significant differences between ourselves and Jacobs et al., but we think the correspondence also says something about the publication of qualitative research studies in prestigious natural scientific outlets. This Correspondence/Reply received three peer reviews: one reviewer appeared unsure whether our Reply was worthy of publication; one felt Jacobs et al. should be ignored; and a third, rather presciently, felt that both social scientists and natural scientists would think they had ‘won’ and return to their houses none the wiser. Given that Nature Climate Change has published many calls for qualitative and/or interpretive studies from the social sciences and humanities, we’d like to muse, very briefly, on what might be learned from this exchange for those writing, or reading, such reports in the future. From our perspective, we’d like to highlight the importance, and difficulty, of inductive research.

We could think of hypothetico-deductive, hypothesis-driven research as being “top-down”: one starts with a theory or a question and then interrogates a data set in order to see how that theory stands in relation to the data. Inductive research is quite different and is “bottom-up”: you start with the data, see patterns or interesting things, and the theories and broader claims are integrated later. This latter approach is certainly the one which we took in our Letter; the two main theoretical sources in that piece – Jeanne Fahnestock’s work on meaning in science (£) and Susan Leigh Star’s writings on certainty in science – were only incorporated after we had analysed the press conference transcript itself. This approach is incredibly common in the social sciences; Grounded Theory, the archetypal inductive approach, is quite possibly the most widely used research method. Inductive research is, however, far less common in the natural sciences (although some claim that Big Data is changing this).

When reading back the methods section of our Letter, it is noticeable that the word ‘inductive’ does not appear at all, and the role of theory in our coding scheme (i.e. that it played a largely insignificant role) isn’t as well explained as it might be. To a social scientific audience we don’t think this would be problematic, but in this context we think it proved to be so; there is the strong sense that Jacobs et al. think we didn’t test our hypotheses rigorously, but this is precisely because there weren’t any hypotheses. A perfect example of different answers arising from an unacknowledged difference in questions, rather than anything fundamental about the answers themselves.

As calls (and pushes!) for interdisciplinary research intensify, our experience at the sharp end within a high-profile journal has taught us two basic but important lessons for future engagements: i) qualitative researchers need to be clear about the methods they are using when publishing in natural science journals, and ii) those more accustomed to hypothetico-deductive approaches need to at least be aware that qualitative methods frequently follow different rules to those they most commonly encounter.


Improving climate change communications: Moving beyond scientific certainty.

This post is co-authored with Warren Pearce and first appeared on the Making Science Public blog on the 08/06/2015. It is a summary of a paper which appeared in Nature Climate Change. You can check out the original version here: https://blogs.nottingham.ac.uk/makingsciencepublic/2015/06/08/ipcc-press-conference/

The paper was mentioned on Radio 4, written up in The Independent, and received (positive and negative!) write-ups on various blogs. A response to those comments is forthcoming.

In the last 25 years, scientists have become increasingly certain that humans are responsible for changes to the climate. Nonetheless, there is a widespread feeling, memorably summed up in the Kudelka cartoon “None So Deaf”, that this certainty has failed to make climate change meaningful enough to prompt significant personal, political or policy responses. An opportunity to address this problem was presented by the publication of the IPCC’s Fifth Assessment Report in 2013, as leading climate scientists appeared before the world’s media at a Stockholm press conference. The press conference, one would imagine, is an ideal opportunity to make scientific findings intelligible and meaningful to non-scientific audiences and, in a new paper, we examine how scientists utilise this opportunity. What we found is that, in attempting to demonstrate the importance of climate change, scientists actually became inconsistent about ‘what counts’ as scientific evidence, and this led to confusion and condemnation in the press.

In this blog post we’ll go through what happened in the press conference by detailing four stages in the intended process of making climate change meaningful. We’ve reproduced some figures from our article here because we think they make it much easier to follow the argument. The matrix for all of these figures places ‘meaning’ on the x-axis and ‘certainty’ on the y-axis. We won’t go into details here but, if you’re interested in the creation of the matrix, please look at the method section of the article itself, or leave a comment below. Line numbers refer to a transcript of the press conference which is available in the paper’s supplementary information.

Phase One: Increasing certainty

[Figure 1]

Since the last IPCC report, certainty over the nature of climate change has increased and, unsurprisingly, scientists at the press conference stressed this increase:

…the evidence for human influence has grown since Assessment Report 4, it is now deemed extremely likely that human influence has been the dominant cause of the observed warming. (Steiner L153–155).

However, and as noted above, academics from the social sciences and humanities have argued that climate change has yet to attain enough public meaning to prompt significant personal, political and policy responses. Figure 1 thus shows a vertical shift along the y-axis, representing increased certainty but also the fact that climate change continues to attract little public meaning.

Phase Two: Making climate change meaningful: the intention

[Figure 2]

Within the press conference, scientists tried to use the certainty of climate change to demonstrate that it is meaningful and that ‘we’ must take action:

[The] report demonstrates that we must greatly reduce global emissions to avoid the worst effects of climate change. (Jarraud L90–92).

In Fig. 2 we represent this move with a horizontal shift along the x-axis (to position three); certainty remains, but attached to this certainty is a great deal of public meaning. We argue that this is what the scientists wanted to achieve in the press conference: retaining the certainty of the report while adding meaning.

Phase Three: Making climate change meaningful: the reality

[Figure 3]

There was, however, an inconsistency in the scientists’ argument. They consistently drew on short-term temperature increases in order to give climate change meaning:

…the decade 2001 onwards having been the hottest, the warmest that we have seen. (Pachauri L261–263).

However, the scientists also understood these short-term temperature increases to be less certain than the overall theory of climate change:

periods of less than around thirty years… are less relevant. (Stocker L582–583).

Thus, the meaningful short-term temperature changes were actually incorporated at the expense of certainty. While the intended move was therefore to the top-right quadrant (position three), the actual move was to the bottom-right quadrant (position four): meaning had been added, but at the expense of certainty.

Phase Four: Inconsistent attempt to maintain public meaning and certainty

[Figure 4]

Drawing on meaningful information like ‘the hottest decade’ proved problematic for the scientists, for it is hard to see why the short-term increase in temperature during the ‘hottest decade’ is very different from the short-term decrease in temperature witnessed during the 15-year ‘pause’. Journalists repeatedly asked scientists about the pause and, in particular, how they could be increasingly certain about climate change in the face of such uncertainty:

Your climate change models did not predict there was a slowdown in the warming. How can we be sure about your predicted projections for future warming? (Harrabin L560–562).

Faced with these questions, scientists insisted that short-term temperature changes were irrelevant for climate science:

we are very clear in our report that it is inappropriate to compare a short-term period of observations with model performance. (Stocker L794–796).

Given the types of statements we saw during Phase Three, it is perhaps unsurprising that this retreat led to confusion, incoherence, and criticism within the press conference.

Conclusion: uncertainty is meaningful

…the scientific reductionism of climate change makes consensus possible, but the result is, in some sense, irrelevant. The things that can be known with scientific certainty are not necessarily the most important to know. (Cohen et al., 1998, p. 361)

Climate change is an area where consistent attempts are made to communicate the certainty of the science. As a result, a spotlight on scientific uncertainties may be seen as unwelcome. However, in the run-up to the United Nations climate summit in Paris, making climate change meaningful remains a key challenge, and our analysis of the press conference demonstrates that this meaning-making cannot be achieved by relying on scientific certainty alone. When trying to engage the public with climate science, communicators should be aware that there is a tension between expressing scientific certainty (by focusing on long-term trends) and making climate change meaningful (by focusing on short-term trends) and, what is more, that this tension may be unavoidable. A broader, more inclusive public dialogue will have to include those crucial scientific details about which we are far less certain; these uncertainties need to be embraced in order to make climate change meaningful.

Image credit: screengrab from video recording of IPCC press conference, Sep 30th, 2013.

Sickened by modern farming?

This post first appeared on Spiked! Online on the 11/08/2009 (i.e. a very long time ago).  You can check out the original bells and whistles version there: http://www.spiked-online.com/newsite/article/7254#.U7J_U5RdWQA

 

Agriculture has provided great benefits to mankind, yet greens are keen to blame it for the swine flu pandemic.

The emergence of agriculture was among the most important events in the history of humanity. Evolutionary psychologists pinpoint the rise of agriculture 12,000 years ago as the time at which conventional selection pressure ended and human evolution effectively stopped (1). Yet while agriculture has been central to the development of society, both materially and intellectually, there are now many voices doubting its benefits.

The impact of agriculture on society has been profound and goes beyond merely providing greater control over the production of food. For example, the storing of produce necessitated by early agriculture allowed humans to commence the record keeping, fact finding, and commercialisation that would eventually lead to what we now call science. Thus both agriculture and the science it brought with it had overwhelmingly positive consequences for humanity.

The green zeitgeist of recent times, however, argues that agriculture, and the domination over nature that it necessarily entails, is ethically dubious and destructive. Followers of James Lovelock’s Gaia theory, in particular, have a peculiarly apocalyptic outlook on where science and agriculture will lead humanity. These fears may broadly be given the name ‘Medean fears’. As a lead article in New Scientist recently stated, Lovelock may have picked the wrong classical character to embody Nature: ‘If we were to choose a mythical mother figure to characterise the biosphere, it would more accurately be Medea, the murderous wife of Jason of the Argonauts. She was a sorceress, a princess – and a killer of her own children.’ (2)

However, the green movement prefers to claim Gaia for itself and assigns her evil alter ego Medea to others; supporting ‘anti-nature’ agriculture makes one a Medean.

While there have always been anarcho-primitivists and others who have opposed agriculture and its associated effects, the rise of fatalistic environmentalism has allowed Medean fears to become decidedly mainstream, from Hollywood films such as The Happening (3) to popular scientific publications such as New Scientist (4). Medean fears are so commonplace that they seem to have entered the collective consciousness.

An excellent recent example of Medean fears is swine flu. The story of swine flu has been constructed to demonstrate that humans and their agricultural habits create pandemics. The very name ‘swine flu’ generates Medean fears, reflecting not just unease over animals, but farmyard animals in particular. It is as if Napoleon and Snowball from George Orwell’s Animal Farm had somehow stumbled upon chemical warfare.

The name ‘swine flu’ itself shows how the virus has been reinterpreted through latter-day Medean fears. Pandemics of recent times have been given the titles ‘Asian’ or ‘Russian flu’ (1890s), ‘Spanish flu’ (1918-1920), ‘Asian flu’ (1957-1958), and ‘Hong Kong flu’ (1968-1969). What is it about swine flu that dictates that it should be named after the host animal and not the country of origin, as has previously been the case?

It seems unlikely that this is because no such candidate existed, for while fears over a pandemic were almost immediate, the virus was quite clearly depicted in the media as being solely from Mexico. Indeed, ‘Mexican flu’ was the moniker adopted by the Israelis when the name swine flu was deemed ‘unkosher’ (5). Further, the naming does not appear to be a decision based upon good will towards Mexico. As already discussed on spiked, Mexico has still been widely ostracised (6).

Even in the 1980s, when it became apparent that HIV/AIDS had transferred to the human population from primates, nobody thought it necessary to name the virus ‘primate immunodeficiency virus’. Rather, this naming trend appears to be a new phenomenon, with both the bird and swine flu viruses being interpreted quite differently to their predecessors. Numerous articles now directly claim that agriculture is inherently dangerous. Those who already opposed agriculture have used swine flu to demonstrate the validity of their claims.

Viva!, a charity which campaigns for veganism and the end of farming, issued a press release stating: ‘There is no mystery about this if you look at the conditions in which pigs are reared. They are fed a battery of drugs almost daily from weaning to slaughter to fend off a host of diseases. The consequence is that their immune systems are shot to pieces and their bodies have become a playground for bacteria and viruses. And sadly, it is much the same for poultry.’ (7) Guardian food writer Felicity Lawrence wrote: ‘Just as an unsustainable financial system caused the current banking crisis, the intensive farming of animals is at the heart of the swine flu pandemic.’ (8)

Caroline Lucas, the leader of the Green Party in England and Wales, has called for swine flu to be the catalyst for more wide-ranging reforms of agricultural legislation, lest more severe Medean repercussions be experienced: ‘As evidence mounts of the links between the increasing intensification of pig and poultry production, and the spread of these animal-based epidemics that can be lethal to humans, it is even more urgent that ministers set up the thoroughgoing commission of inquiry which the Green Party first called for after the avian flu outbreaks a few years ago.’ (9)

Swine flu is being used as a means to demonstrate that agriculture is not a means to feed the world but the cause of global catastrophe. Pandemics, however, including those associated with animals, have been present for centuries (as the Black Death illustrates), and even in modern times zoonoses (infections that jump from animals to humans) have occurred in wild as well as captive populations.

These statements, and many others reflecting the broader Medean fear, are a cause for great concern. The rise of agriculture has never been smooth or without serious repercussions. But the notion that it either never was, or no longer is, of more benefit than harm deserves to be quickly dismissed. Medean fearmongers spread concern regarding immunisation, xenotransplantation, genetic modification, and advances in fish farming, all of which have the potential to further enhance the utility of agriculture. Instead of celebrating these techniques for expanding agricultural opportunities, Medean fearmongers give us mysticism, fatalism, and an intense fear of utilising the natural resources available to us.

This fear of Medea will ultimately have far more devastating consequences than swine flu because we rely on agriculture to supply our food. Misplaced and often mystical concerns which suggest we should shy away from intensive, mass-production techniques and new technology will have dreadful repercussions for humanity.

Kandinsky, New Objectivity, and ripping apart the furniture

This post first appeared on the Making Science Public blog on the 19/06/2014.  You can check out the original bells and whistles version there: http://blogs.nottingham.ac.uk/makingsciencepublic/2014/06/19/kandinsky-new-objectivity-and-ripping-apart-the-furniture/

 

Circles, Squares, and nonrepresentational forms in Munich

Recently I visited Munich and, at the behest of a friend who knows far more about these things than me, spent a morning wandering around the Lenbachhaus art gallery[i]. The top floor of this spectacular building is largely taken up with paintings from the Blue Rider movement, which was active around Munich from around 1910 to 1914. Perhaps the most famous member of the Blue Riders was Wassily Kandinsky, and the Lenbachhaus offers a fantastic array of his work.

As you walk around the rooms of the Lenbachhaus you can see Kandinsky’s work change quite remarkably; in the first years of the twentieth century, in paintings such as Der Blaue Reiter (1903), you can clearly make out stylised, yet certainly recognisable, buildings, trains, and – of course – horses. Ten years later, however, when Kandinsky was fashioning his ‘impressions’ and ‘abstractions’, the outside world, at least in any recognisable form, completely disappears in a haze of colour. In the words of the Lenbachhaus, Kandinsky’s improvisations:

offer an especially clear illustration of his personal path to abstraction: to Kandinsky’s mind, abstraction meant a sustained effort to conceal and encode representational content in order to convey spiritual ideas in physical form by unfolding their “inner harmony.”[ii]

Kandinsky is not turning away from the outside world here, and the Lenbachhaus notes that although these paintings “…appear largely nonrepresentational…they were inspired by impressions of nature outside of him”. The impressions are, nonetheless, undoubtedly harder to read, nonintuitive, and (as the aforementioned friend discovered) can leave the uninitiated impressed but slightly confused.

What was particularly interesting about this gallery, however, was that the Blue Rider movement was, quite deliberately, juxtaposed with paintings from the New Objectivity movement. These paintings could not have been more different. Christian Schad’s Operation (1929) shows appendix surgery in photorealistic detail. Josef Scharl’s The Fallen Soldier (1932) brings to mind Otto Dix (another artist associated with this movement) in its stark depiction of death in the trenches during World War One. Without claiming that I’m offering anything like a definitive, or indeed accurate, reading of these works, it seemed to me that these artists, who had grown up and often served during WW1, were saying: ‘it is all well and good to play with these experimental forms, aiming to reach a deep truth or new understanding through nonrepresentation, but The Somme is too important for that. The Somme must be shown in the cold, horrific light of day for the mass slaughter that it was’.

Circling the Square in Nottingham

Shortly after I’d returned from Munich I helped out at the conference we held here in Nottingham entitled Circling the Square[iii]. As anyone who attended that conference (or who has followed the ensuing blogs) will know, there has been a great deal of discussion regarding the types of knowledge, and the forms that knowledge is presented in, in relation to both the natural and social sciences. One strand of this debate has been a strident criticism of social scientific attempts to ‘deconstruct’ the facts of the natural sciences, in particular quantum physics and global warming. To me, much of this criticism strikes a chord with New Objectivity: it is fine for social scientists to warble away when concerning themselves with inconsequential areas of study, but the straightforward, empirical reality of quantum physics cannot be denied (as a contributor to this blog has told me, our microelectronics industry would be in poor shape without it) and global warming, like death in the trenches, should not be denied – the consequences are just too dire for the kind of obscurantism we associate with social scientists (and blue riders…).

In a lovely article[iv], Derek Edwards and colleagues refer to these two arguments, respectively, as ‘furniture’ (the reality of which cannot be denied) and ‘death’ (the reality of which should not be denied) arguments against relativism. These types of questions certainly provoke soul-searching among social scientists: should I deconstruct the claims of climate scientists if I believe in this mode of analysis, or does the potential uptake of those arguments by climate sceptics necessitate that I stop? Does my refusal to apply post-structuralist theory to quantum physics demonstrate a laudable restraint or a failure of nerve? Is there space for a blue rider in Verdun?

Of course, I don’t have answers to these questions but, in the hyperbolic fallout of this conference, it’s interesting to see parallels elsewhere. Indeed, I think that is one of the values of social scientific analyses in these areas; these methods and theories might help us think about some of these issues – or ourselves – differently, provide new insights, or perhaps simply provide some pleasure. What is more, it is perhaps worth remembering that, in the Lenbachhaus circa 2014, there remains space for both.

 

[i] http://www.lenbachhaus.de/index.php?id=10&L=1

[ii] http://www.lenbachhaus.de/collection/the-blue-rider/kandinsky/?L=1

[iii] http://www.nottingham.ac.uk/conference/fac-sci/circling-the-square/index.aspx

[iv] Edwards, D., Ashmore, M. & Potter, J., 1995. Death and Furniture: The rhetoric, politics and theology of bottom line arguments against relativism. History of the Human Sciences, 8(2), pp.25–49.

Autism, sociality, and human nature

This post first appeared on Somatosphere.net on the 18/06/2014.  You can check out the original bells and whistles version there: http://somatosphere.net/2014/06/autism-sociality-and-human-nature.html

This post was subsequently taken up and published in full by Making Science Public on the 26/06/2014.  You can check out that version here: http://blogs.nottingham.ac.uk/makingsciencepublic/2014/06/26/autism-sociality-and-human-nature/

This post was then (!) taken up by Pandaemonium on the 01/07/2014.  You can check out that version here: https://kenanmalik.wordpress.com/2014/07/01/how-autism-became-a-window-to-the-soul/

There are, I believe, a few reasons to suppose that autism is a particularly fascinating area to be studying at the moment.  What are those reasons?  Firstly, prevalence rates of autism have soared in recent decades, from 1:2,500 in 1978 to around 1:100 today: a staggering 25-fold increase.  Secondly, and simultaneously, the nature of those receiving a diagnosis of autism has changed considerably.  To give just one example, in the 1980s no more than twenty percent of individuals diagnosed with autism had an I.Q. above 80.  Today, by contrast, it is widely argued that “intellectual disability is not part of the broader autism phenotype…[and] the association between extreme autistic traits and intellectual disability is only modest” (Hoekstra et al. 2009: 534).  Whatever you make of I.Q. scores, this changing profile means that it is reasonable to assume that when you meet somebody with autism today they are quite unlikely to be similar to someone you would’ve met with the same diagnosis just thirty years ago.  Thirdly, as the number of people diagnosed with autism has increased, and as the capabilities of those individuals have increased, a (self-)advocacy network of enormous importance and influence has arisen, perhaps on a scale hitherto unseen.  When woven together, these dynamic elements have led Ian Hacking to claim that, in autism, “we are participating in a living experiment in concept formation of a sort that does not come more than once in a dozen lifetimes” (Hacking 2009: 506).  This, I think, is quite exciting.

Finally, and perhaps differently, over the past thirty years autism has become an all-pervasive cultural experience.  ‘Autistic fiction’, for example, has become a recognised genre.  And when I talk of ‘autistic fiction’, think not only of Rain Man and The Curious Incident of the Dog in the Night-Time but of all those times that autism is used as a ‘prop’ or ‘prosthetic device’ to explore humanity in toto (Murray 2008: 163).  Just last week I found myself watching The Machine, a dystopian film in which badly brain-damaged war veterans have computer chips implanted into their brains with the aim of allowing them to return to ‘normal functioning’ (read: become super-soldiers).  As you might imagine, this experiment does not end well.  What I find particularly interesting, however, is the manner in which these scientists come to realise that these militarised cyborgs are less than human: they fail the Sally-Anne Test, one of the oldest psychological tests for autism.  “Facts are just facts” says Paul the cyborg, unable to grasp that the world could appear different to a second person.  And so it is within The Machine: as with a great deal of fiction (and, as I’ll argue below, within particular academic disciplines) what is missing in autism is taken to reveal something fundamental about what needs to be present in order to be human.

How did this situation occur?  How did autism, which until quite recently was an unusual diagnosis of little broader concern, come to hold a central place in debates over human nature?  That’s what I’d like to think about in this essay.  My argument, in short, is that what is ‘missing’ in autism is, crudely put, assumed to be social functioning, and this is crucial when it comes to understanding why autism is taken to be so important for the human.

 

Re-constructing the social

Those working in disciplines such as science and technology studies and anthropology are used to debates over the nature of ‘the social’.  It is perhaps more surprising, however, to find that the experimental human sciences have engaged in similar debates (see Danziger (2000) for an overview).  Consider someone praying, alone, in front of an altar.  Is this a social behaviour?  Most psychologists working before 1950, certainly before 1920, would probably have answered ‘yes’; the activity is demonstrably being shaped by, and takes the form it does because of, that person’s previous social experiences and group membership.  It seems exceedingly unlikely that someone who had never been immersed in the traditions of the church would find themselves praying at this altar, in this physical position.  When the social is understood in this way, it is not that some behaviours (or cognitions, or attitudes, or emotions…) are always social and others are always non-social; rather, individual background and context are crucial to making a judgement.  John Greenwood gives the example of a pro-life stance on abortion – this could be a social belief if it is held because of, for example, membership of the Catholic Church, and it could be a non-social belief if it has been arrived at ‘individually’ and ‘rationally’ (Greenwood 2004: 21).

If you were to ask experimental social psychologists and neuroscientists the same question today, we would find the opposite answer most frequently given: praying is not a social behaviour.  The reason for this is that, within today’s experimental psychology and neuroscience, the social is characterised by two features.  Firstly, within contemporary thinking, the social refers to objects of cognition (the things which our cognitions are directed towards) and not forms of cognition (the particular shape of those cognitions).  Cognitions, or behaviours, which are present, or altered, by group membership (such as praying) are not social under this framework.  Instead, a social cognition is simply one related to the understanding of other people in the immediate vicinity.  Similarly, a social emotion is something like empathy or sympathy: an emotion whose object is another person.  The social is, therefore, now primarily about interpersonal engagement.  Kurt Danziger has called this construction ‘a social in the shape of a crowd’, a concept that captures the idea eloquently: immediate, interpersonal behaviour is always social, while nothing else ever is.  A second feature of this novel understanding of the social is that the cognitive processes responsible for social behaviours (like mimicry or empathy) are also responsible for various non-social behaviours.  To give a famous example, cognitive psychologist Alan Leslie (1987) proposed that the ‘innate cognitive module’ which allows us to understand other people’s minds (i.e. to empathise with other people) also allows us to engage in pretend play (e.g. to pretend that a banana is a telephone).  Thus, the social ceases to be qualitatively distinct from the non-social, relying upon the same cognitive and neurological processes.

Within the experimental human sciences this is a really significant shift in understandings of the social.  Under this new regime the social is individualised, essentialised, and biologised, becoming a property of individual persons outside of context and individual or institutional history.  I have an innate, biological capacity to feel empathy, and this capacity lies at the heart of my social being.  In other words, ‘the social’ is not something that shapes us throughout our lifetimes; it is something that we are, inherently and naturally.

 

Giving autism its form

An exception to the rule that we are inherently social creatures is believed to be found in autism.  As described in the introduction, social abnormality is taken to be a, or even the, primary symptom in autistic spectrum conditions.  At the most general level, I think we can easily show that the description of autism as a social disorder is reliant upon the contemporary construction of the social, outlined above.  In psychology’s first sense of the social, where praying is social, individuals with autism are demonstrably able: as noted earlier in this essay, many individuals with autism take part in one of the most significant self-advocacy movements of all time.  People with autism are clearly able to join groups, have their behaviours shaped by membership of those groups, and so forth.  It is only when the social is understood as being related to interpersonal conduct that autism becomes conceivable as a social disorder residing within an individual who has difficulty with, for example, feeling empathy.

What I’ve argued elsewhere (Hollin 2014) is that as cognitive theories of autism began to dominate the field of research during the 1980s, autism became dependent upon this contemporary construction of the social in a more profound way, and this dependence marked a definite break from previously dominant psychoanalytic conceptions of the condition.  In a sense, and attempting to follow Mary Douglas, I think that the nature of autism has been made to conform to this idea of the social.

To take the most obvious example of autism cohering to this new construction of the social: autism symptomology has increasingly come to place cognitive capacities alongside overtly interpersonal behaviours.  This is exactly what one would expect given that, as outlined above, the modules which allow us to be social and interact with other people are also taken to govern other cognitive capacities.  Take the theory of Weak Central Coherence (WCC), for example, first articulated in the late 1980s (Frith 1989) and still popular today.  At least in its initial guise, the theory of WCC suggests that one cognitive mechanism, which integrates disparate pieces of incoming information (think here of the skill of ‘reading for context’ – you know that ‘the minute speck of dust’ and ‘the minute past the hour’ should be pronounced differently because you’ve integrated the words’ surrounding context), is responsible for both interpersonal impairment and exceptional or savant abilities on various jigsaw-like tasks.  This linkage, I suggest, has been made possible by the belief that the cognitive processes responsible for social behaviours are also responsible for various non-social behaviours, something basically unimaginable in other frameworks.  Within this framework some behaviours (like puzzle-solving skills) are drawn closer to the centre of autism as order and coherence are seen where previously there was none.  Meanwhile, the threads linking autism and other behaviours are cut; Bonnie Evans has written, for example, on the disappearance of ‘autistic fantasy’ from the contemporary scientific literature (Evans 2013).

 

Autism and human nature

For those of us interested in autism I think these changes are both interesting and important.  However, they do not explain, to return to a point made in the introduction, how what is missing in autism is now taken to reveal something fundamental about what needs to be present in order to be human.  This change, too, can also be related to the shifting conception of what it means to be social.

From around the middle of the twentieth century, psychology, neuroscience, and evolutionary biology began to coalesce around the idea that being social, and in particular being able to empathise with other people, lies at the core of human nature.  There is not space here to investigate this trend within the bio- and human sciences more fully (Vincent Duclos’ interview with Allan Young in Somatosphere is a good place to start; Duclos 2013), but the point can be made rather simply.  It is increasingly believed that human evolution has been shaped not only by ancient physical environments but also by ancient social environments – and by social environments we very much mean social in the contemporary sense, the presence of other people.  Our ‘social brain’ has evolved and now functions specifically to understand those around us.  Given that autism is so often characterised in terms of a lack of empathy, an inability to comprehend those around us, it is unsurprising that the condition has begun to take on importance within narratives of the social brain.  Continuing a long-standing tradition within the psy-disciplines of examining ‘normal’ cognitive functioning in those cases where that function is perceived to be lacking, autism offers a pure case of human-minus-social.  The individual with autism has thus become an example of what Will Viney has called inherently useful humans, useful “not for what they do, simply for what they are” (Viney 2013).  Thus, the social hole in autism is actually a window to the soul.  This, I believe, is how autism has come to stand at the centre of what it means to be human.

From what I can see, there is no reason to suppose that these relations between the social, the human, and autism have stabilised.  Indeed, this recent belief that autism is key to understanding human nature hints at an avenue of potential change. For, while I’ve argued above that autism has been made to ‘conform to an idea of the social’, I think one could legitimately claim that, increasingly, the social is made to conform to the idea of autism.  If your logic stresses that autism is a form of social dysfunction and, therefore, anything functioning in autism cannot be social, then the concept of autism has become very powerful indeed.  I would (very tentatively) suggest that autism research which found mirror neurons to be functioning in individuals with autism contributed significantly to the declining status of those neurons when it comes to understanding ‘normal’ social functioning.  For these reasons, and many more besides, I think an understanding of ongoing autism research within the bio- and human sciences will be one of the key tasks of the medical humanities in the coming decade.

References

Danziger, K., 2000. Making social psychology experimental: A conceptual history, 1920-1970. Journal of the History of the Behavioral Sciences, 36(4), pp.329–347.

Duclos, V., 2013. When anthropology meets science: An interview with Allan Young. Somatosphere. Available at: http://somatosphere.net/2013/10/when-anthropology-meets-science-an-interview-with-allan-young.html [Accessed June 18, 2014].

Evans, B., 2013. How autism became autism: The radical transformation of a central concept of child development in Britain. History of the Human Sciences, 26(3), pp.3–31.

Frith, U., 1989. Autism: Explaining the Enigma 1st ed., Cambridge, MA: Blackwell.

Greenwood, J.D., 2004. The Disappearance of the Social in American Social Psychology, Cambridge, UK: Cambridge University Press.

Hacking, I., 2009. How we have been learning to talk about autism: A role for stories. Metaphilosophy, 40(3-4), pp.499–516.

Hoekstra, R.A. et al., 2009. Association between extreme autistic traits and intellectual disability: Insights from a general population twin study. The British Journal of Psychiatry, 195(6), pp.531–536.

Hollin, G.J., 2014. Constructing a social subject: Autism and human sociality in the 1980s. History of the Human Sciences.

Leslie, A.M., 1987. Pretense and representation: The origins of “Theory of Mind.” Psychological Review, 94(4), pp.412–426.

Murray, S., 2008. Representing Autism: Culture, Narrative, Fascination, Liverpool: Liverpool University Press.

Viney, W., 2013. Useful humans. The Wonder of Twins. Available at: http://thewonderoftwins.wordpress.com/2013/03/15/useful-humans-twins-research-and-the-question-of-use/ [Accessed May 14, 2013].

Is Asda right about mental health?

This post first appeared on the Making Science Public blog on the 02/10/2013.  You can check out the original bells and whistles version there: http://blogs.nottingham.ac.uk/makingsciencepublic/2013/10/02/is-asda-right-about-mental-health/

The obvious answer to the question above is ‘no’; a finer example of Betteridge’s Law of Headlines is not easily found[i].  The decisions by Asda – who sold a ‘mental patient fancy dress costume’ complete with torn white shirt, fake blood, and a plastic cleaver – and Tesco – who sold a similarly sensitive ‘psycho ward’ costume[ii] – were quite evidently misplaced.  These supermarkets’ actions, and their representations of mental health, were undeniably wrong whichever angle (ethical, factual, aesthetic) one believes to be the most important.  It is therefore not surprising that those involved in campaigns such as Time to Change were delighted that the costumes were removed from sale[iii].  Nonetheless, the idea that the moralised picture of mental health presented by Asda reflects not (only) misguided representations of mental illness but also something about the very nature of mental illness itself is an interesting one.  Considering Asda’s position, alongside those who have dismissed it, offers the possibility of a more nuanced consideration of what exactly mental illness is, as well as of possible forms of response.

As a campaign, Time to Change captures the essence of the criticism made towards Asda, its mission statement being to ‘end mental health discrimination’.  One portion of Time to Change’s website is entitled ‘what are mental health problems?’ and this section of the site lists four myths and four facts relating to mental health.  One of these facts can be used, for current purposes, to work through issues of what exactly mental illness is:

“Nine out of ten people with mental health problems experience stigma and discrimination.”[iv] 

What can we learn from this statement?  Firstly (and perhaps most sensibly), this statement tells us that the majority of individuals diagnosed with mental illness suffer in the way embodied by the costumes described above; they are assumed to be violent, and are subject to parody, abuse, and derogatory terminology.  A second claim here is that it is clearly possible for stigma and abuse to be entirely removed from discussions surrounding mental health: if one in ten people with mental health issues are able to avoid stigma and discrimination, then it is demonstrably true that such value-based representations are not an inherent part of the classification itself.  Thus, within this sentence, there is actually a third claim about the relationship between mental illness and society, namely, that the value-based representations of a society are not part of the disease construct itself, which exists, in toto, outside of any cultural setting.  In the remainder of this blog I would like, briefly, to contest the second and third of these claims and to suggest that society’s values define the very nature of mental illness by contrasting it with normality.

Where does the word ‘normal’ originate?  The answer, as eloquently explained by Ian Hacking, is geometry:

“It meant perpendicular, at right angles, orthogonal.  Norma is Latin, meaning a T-square.  Normal and orthogonal are synonyms in geometry; normal and ortho- go together as Latin to Greek.  Norm/ortho has thereby a great power.  On the one hand the words are descriptive.  A line may be orthogonal or normal (at right angles to the tangent of a circle, say) or not.  That is a description of the line.  But the evaluation ‘right’ lurks in the background of right angles.  It is just a fact that an angle is a right angle, but it is also a ‘right’ angle, a good one.”[v]

When applied to mental illness, the description of someone as ill or abnormal therefore seems to entail a crossing of the fact/value divide: an objective statement about how a person is behaving and a subjective claim about how they should behave.

Georges Canguilhem, writing in the 1940s, claimed that this breach of the subject/object divide is not an avoidable entanglement of facts and values – as is suggested in claims two and three of Time to Change’s fact, above – but is rather an essential aspect of knowledge concerning the abnormal.  Canguilhem stated that there “is no pathological disturbance in itself: the abnormal can be evaluated only in terms of a relationship.”[vi]  In relation to mental health, this claim suggests that it is only within the bounds of acceptable conduct, as described by society, that various forms of psychological disorder become apparent.  To take a particularly obvious example, a classification like Attention Deficit Hyperactivity Disorder (ADHD) only becomes visible within the bounds of a society which expects children to sit still and pay attention in school.  To say this of ADHD does not deny its reality, that suffering is caused, or even that therapeutic avenues should be explored.  It does, however, suggest that ADHD is, unavoidably, both a description and a moral judgement; it suggests that stigma and discrimination are an inherent part of the classification itself.  This idea was memorably captured by Canguilhem when he stated that “the sick man is not abnormal because of the absence of a norm but because of his incapacity to be normative.”[vii]

Conceptualising mental illness as both subjective and objective does not just make us think in new ways about what mental illness is; it offers the possibility of new forms of conduct.  An acceptance that Asda’s moral judgement is in some sense an essential part of any disease classification does not limit the critique of Asda’s representation of mental illness but instead offers the additional possibility of changing mental illness itself, suggesting the potential of configuring ourselves and our society in such a way that various forms of moral judgement cease, so that suffering slips away or changes shape entirely; without the expectations of the schoolroom, the treatment and experience of ADHD would be very different.  By changing society, we can change disease itself.  Don’t somatise, organise![viii]

[i] https://en.wikipedia.org/wiki/Betteridge%27s_Law_of_Headlines

[ii]  http://www.bbc.co.uk/news/uk-24278768

[iii] http://www.time-to-change.org.uk/news/time-change-responds-asda-contribution

[iv] http://www.time-to-change.org.uk/what-are-mental-health-problems

[v] Hacking, I., 1990. The Taming of Chance, Cambridge, UK: Cambridge University Press, pp. 162–163.

[vi] Canguilhem, G., 1991. The Normal and the Pathological, New York: Zone Books, p. 188.

[vii] Ibid., p. 186.

[viii] I first heard this phrase in relation to ‘The Age of Anxiety’, a summary of which can be found here http://chronicle.com/article/Our-Age-of-Anxiety/138255/

Making science public: The issue of language (jargon)

This blog originally appeared on 01/08/2012 on the Making Science Public Blog.  You can find the original here: http://blogs.nottingham.ac.uk/makingsciencepublic/2012/08/01/making-science-public-the-issue-of-language-jargon/

 

Making science public: The issue of jargon

Over recent days there has been a fascinating blog-based debate of great interest to the Making Science Public agenda. This debate focused on the nature of writing in the natural and social sciences. For the sake of simplicity, we’ll say this debate has taken place between the blogger Neuroskeptic, a neuroscientist, and Andrew Balmer, formerly of this parish and now a sociologist at the University of Manchester. Again for the sake of simplicity, we’ll say the debate has concerned Medical Sociology and Neuroscience. In reality, the discussion extends far beyond these two exceptional bloggers, and beyond the immediate academic disciplines to all of the (social) sciences. I don’t for a moment hope to resolve the debate but do think it can be used to consider the nature of knowledge and the social/natural science dichotomy.

Are Sociologists Cuttlefish?

He that uses many words for explaining any subject, doth, like the cuttlefish, hide himself for the most part in his own ink. John Ray (1627-1705)

The above quote essentially sums up the position of Neuroskeptic as far as medical sociology is concerned (a view expressed initially here: http://goo.gl/K0ujl).  As the objects analysed by medical sociology are known and used by us all (the examples of ‘overweight’, ‘masculinity’, and ‘man’ are considered in this article), technical terms like ‘somatic society’, ‘hegemony’, and ‘ideology’ do nothing but obscure the issue at hand. This is not a novel accusation. It was made famous by the Sokal Affair, in which physicist Alan Sokal had utter nonsense published in the journal Social Text, apparently because the editors were unaware that the emperor was naked.  Even Michel Foucault, not always the easiest social scientist to engage with, is reported to have labelled the famously difficult Jacques Derrida obscurantisme terroriste – terrorist or terrorising obscurantism. Neuroskeptic seeks to extend that critique beyond science and cultural studies to all social scientists, pleading: “why don’t social scientists want to be read?”

This accusation is refuted by sociologist Andy Balmer. Balmer cites an abstract from Nature Neuroscience which does seem absolutely incomprehensible to the non-specialist. The only reason such technical terminology should be deemed permissible in neuroscience and not medical sociology is, as Balmer notes here (http://goo.gl/vsKwb), the:

“underlying assumption that our [sociologists’] technical terms are just there to make us sound clever whereas their technical terms are essential to properly characterising phenomena and communicating efficiently.”

Despite a subsequent back (http://goo.gl/D4z80) and forth (http://goo.gl/WnY54), this essentially captures the debate: ‘extra-societal’ technical vocabulary is necessary in natural science but not in social science.  I firmly recommend reading those entries linked above (and the comments that follow them) for a more in-depth, and suitably accessible, discussion. In addition, readers can also consult two other blogs that grapple with similar issues: one by Ed Yong and one by Alex Brown.

A Complicated Mater 

As someone writing in a sociology department, it will come as no surprise that I lean towards Andy Balmer’s arguments here, although I think particular arguments on both sides have significant merit. What I’d like to do here, however, is examine two implicit points in the argument of Neuroskeptic which are relevant to Making Science Public:

1) Natural sciences deal with phenomena outside of everyday experience and develop new concepts and a specialised vocabulary to deal with these newly discovered phenomena.

2) Technical, social scientific language is outside of the grasp of ‘society’ in general.

Let us consider those two claims in turn. With regard to the first claim, it is evidently true that natural and social sciences develop technical vocabularies to help them understand the phenomena they seek to explain. But are the technical writings of natural scientists really isolated from ‘everyday’ understandings of the ‘man-in-the-street’? Regular readers of this blog will look upon this claim with the utmost scepticism: one need only look at Brigitte Nerlich’s past two blog entries to see the role of metaphor in the development of scientific concepts. One of my favourite examples, from the field of neuroscience under discussion in the above entries, comes from two quotes made famous by Michel Foucault. Foucault opens The Birth of the Clinic with two descriptions of the arachnoid mater, the protective membrane of the brain and spinal cord (Foucault, M., 2003. The Birth of the Clinic: An Archaeology of Medical Perception, London: Routledge, pp. ix–x). Pomme, speaking in 1769, sees the mater as “pieces of damp parchment” that peel away and are excreted by the patient.  Bayle, speaking in 1825, provides an acutely observed visual description of the arachnoid and the membranes, their colour, and their thickness. Compare those descriptions with Ian McEwan, who in 2003 described the same mater as “innumerable branching neural networks sunk deep in a knob of bone casing, buried fibres, warm filaments with their invisible glow of consciousness” (McEwan, I., 2005. Saturday, London: Jonathan Cape, p. 15). One could not imagine three more different descriptions! It would be impossible to tell that they even described the same object if we were not given that information in advance. I would agree with Ian Hacking here, that the differences in these three descriptions can be understood not only through changes in author and writing style but also because “the kinds of things to be said about the brain in 1780 are not the kinds of things to be said a quarter-century later” or indeed two centuries later (Hacking, I., 1986. The archaeology of Foucault. In D. C. Hoy, ed. Foucault: A Critical Reader. Oxford: Basil Blackwell Ltd, p. 30). This is because of a continual discussion between the triad of science-technology-society, to which the natural sciences are not immune. As this blog has repeatedly shown, metaphors play an important role in this.

Consider now the second point, that social scientific language is outside of the grasp of society. As someone who studies psychology as an academic discipline, this, too, seems to me an odd claim. Psychology, a discipline still only 130 years old, is also filled with technical vocabulary, and yet a great deal of that vocabulary is now also that of the man-in-the-street.  We all negotiate terms from psychoanalysis – ego, libido, phallic symbol – with ease. We understand ourselves in terms of our intelligence quotients, neuroticism, and self-esteem. When things go wrong with ourselves or others we deftly apply terms from psychopathology like depression, obsessive-compulsion, and autism to explain them. This is science made public, technical vocabulary entering a new sphere. Maybe terms like ‘somatic society’, ‘hegemony’, and ‘ideology’ aren’t in general circulation at the moment, but we would be wise to be wary of assuming that this will never be the case. Of course, relating this back to point 1, the manner in which terms are taken up by society will surely feed back into science and onto the workbench, and so on until the end of days.

Conclusion

In a sense, this post has run off at quite a tangent to the initial discussion between Andy Balmer and Neuroskeptic. If it is to contribute to that debate, it is only to add more wood to the fire attempting to burn down the fence between the natural and the social sciences: a suggestion that the differences are quantitative and not qualitative, and that the questions being raised should apply to science as a whole rather than to a particular branch of it. In terms of Making Science Public, the argument has – once again – been that the relationships between science and society are more public than is sometimes acknowledged, and that this is the case regardless of discipline. That in itself is an important lesson for us to keep in mind.