Category: Cognitive Biases

  • Why I am becoming convinced that bifurcated argument on Social Media is detrimental

    Why I am becoming convinced that bifurcated argument on Social Media is detrimental

    During this current ‘not-plebiscite’ season I have been asked several times, on various social media platforms, what my position is on various aspects of the debate. But, apart from one sleep-deprived enquiry about jurisprudence, I have decided that I won’t be posting on the topic. More than this, I am becoming convinced, for a few reasons, that the place for extended debate is not social media.

    Much of this has come from revisiting some research I was involved in back when Tom was the ‘first friend’ you had and Facebook was in its infancy (ok, it was 2006). Most of this research involved evaluating hugely expensive telepresence solutions, such as the HP Halo system, as a means of improving computer-mediated communication (CMC).

    Two aspects of this research I will briefly consider. Most of this post is drawn from a paper I wrote back in 2015, so some bits are dated, and it is written for academic presentation. On the upside, there are footnotes 😉

    Emotional Confusion

    The first aspect is emotional confusion, which is often present within textual communication and often parodied in mainstream media: from the innocently worded text message being read in an unintended tone, to the innocuous social media message eliciting murderous responses. See that classic Key and Peele sketch on text message confusion here (language warning). These situations are so often parodied because they are highly relatable; many, if not all, of us have had similar experiences. Why? Why does text on a screen elicit such powerful emotive responses, when the same message in other forms barely registers a tick on the Abraham-Hicks scale? Studies have shown that it is the sociality, or social presence, of the medium that provides the best insight into the emotional regulation that can be so diversely represented in CMC.[ref]Antony S. R. Manstead, Martin Lea, and Jeannine Goh, ‘Facing the future: emotion communication and the presence of others in the age of video-mediated communication’, in Face-To-Face Communication over the Internet (Studies in Emotion and Social Interaction; Cambridge University Press, 2011), http://dx.doi.org/10.1017/CBO9780511977589.009.[/ref] Specifically, in our case it is the factor of physical visibility, or more precisely the lack thereof, that impacts on emotional regulation. When engaging in social interaction, an enormous number of social cues are communicated non-verbally, through facial features and mannerisms. Of course, with CMC the majority of these are removed, and those that remain are relegated to the domain of various emoticons and emoji. Notably, this devaluation of the majority of non-verbal social cues serves to reduce the salience of social presence, and therefore the corresponding salience of the interaction partner. It is this effect that has led several companies, including my previous employer, to invest millions in virtual telepresence systems in an attempt to mitigate the loss of visual cues and of the salience of interpersonal interaction.

    What does that mean? Well, in essence, the vast majority of social cues for interpersonal interaction are removed on social media, and it is this context that assists in evaluating the emotional content of a message. From a social identity perspective, Spears et al. found that within CMC-based interactions both in-group and out-group salience and boundaries were profoundly strengthened, and inter-group conflict was heightened.[ref]Russell Spears et al., ‘Computer-Mediated Communication as a Channel for Social Resistance: The Strategic Side of SIDE’, Small Group Research 33/5 (2002): 555–574.[/ref] Furthermore, the degree of expression of these conflicts was heightened, along with the corresponding expressions of in-group solidarity. Essentially, the majority of CMC interactions serve to strengthen positions, rather than act as bridges for meaningful communication. For more on that, see my earlier post on the Backfire Effect.

    Emotional Regulation

    The other side of this comes in terms of emotional regulation. Here Castellá et al. studied the interactions found across CMC, video conferencing and face-to-face media, and interestingly found not only a heightened level of emotive behaviour in non-visual CMC interaction,[ref]V. Orengo Castellá et al., ‘The influence of familiarity among group members, group atmosphere and assertiveness on uninhibited behavior through three different communication media’, Computers in Human Behavior 16/2 (2000): 141–159.[/ref] but also that the emotive behaviour was significantly negatively biased. So it is not merely a heightening of all emotions; as Derks et al. also observed, the evidence ‘suggest[s] that positive emotions are expressed to the same extent as in F2F interactions, and that more intense negative emotions are even expressed more overtly in CMC.’[ref]Daantje Derks, Agneta H. Fischer, and Arjan E. R. Bos, ‘The role of emotion in computer-mediated communication: A review’, Computers in Human Behavior 24/3 (2008): 766–785.[/ref]

    Ultimately, when people are emotionally confused in low-physical-presence environments, they tend to react emotionally, and predominantly negatively. Hence, the majority of emotional expressions found in CMC will be negative reactions from the extremes of any dialectical spectrum.

    What is the outcome of all of this, then? Well, simply put, the very mechanism of computer-mediated social media interacts with our own natural cognitive biases and produces an outcome that is predisposed towards burning bridges rather than building them. And this is even before any consideration of social media echo chambers (that’s another post for another time).

    Where to?

    Sure, there will always be a plethora of anecdotal counters, but given human predisposition I think there is a better way. For me that better way is in person, in a setting where we can explore any conversation at length. So, if you want my views on the majority of controversial topics out there, come and talk to me over a coffee or beer.

  • The OODA loop and Cognitive Biases

    The OODA loop and Cognitive Biases

    This morning I gave a brief talk on several of the cognitive biases that have featured on here over the last few months, and their often stormy relationship with our rapid decision-making/heuristic processes. During that talk I mentioned the OODA (Observe-Orient-Decide-Act) loop briefly, and a couple of people asked questions about the loop over morning tea. If you aren’t up to speed on the OODA loop, read this good article on the Art of Manliness blog: http://www.artofmanliness.com/2014/09/15/ooda-loop/. On the drive home I was considering at which levels our intrinsic cognitive biases affect the OODA loop. Using the diagram from the Art of Manliness blog, the majority of our cognitive biases unsurprisingly impact the Orient stage of the process.

    [Image: OODA loop diagram]

    However, even within this larger model (many other models just have the four stages in a linear or cyclic fashion) there isn’t any neat place for the biases. I would suggest that they act within the Orient stage as a sixth box, affecting the others, and also within the ‘Implicit Guidance & Control’ and ‘Feed Forward’ links of the O-O section of the loop.

    Perhaps we need to do further thinking on where our biases affect our rapid decision making processes. Anyone have a version of the OODA model that incorporates biases tightly?

  • Why ‘We Are All Confident Idiots’ – The Dunning-Kruger Effect

    Why ‘We Are All Confident Idiots’ – The Dunning-Kruger Effect

    ‘ignorance more frequently begets confidence than does knowledge’ – Darwin

    Why is it that a little bit of knowledge appears to super-inflate people’s estimation of their abilities, whereas a significant amount of knowledge in a field makes one painfully aware of their own limitations? Take, for example, a novice car driver or pilot. Once most individuals get over the initial fear and trepidation of driving or flying, they are at a significantly higher risk of accident, and also over-estimate their own competence at the task.[ref]Pavel, Samuel, Michael Robertson, and Bryan Harrison. “The Dunning-Kruger Effect and SIUC University’s Aviation Students.” Journal of Aviation Technology and Engineering 2, no. 1 (2012). http://docs.lib.purdue.edu/jate/vol2/iss1/6.[/ref] Conversely, experienced drivers and pilots underestimate their competence at the task. This phenomenon has been observed regularly throughout history, as noted by Darwin, Bertrand Russell and many others. But it was with David Dunning and Justin Kruger’s 1999 JPSP article that it was formally described, and subsequently entitled the Dunning-Kruger effect.[ref]Kruger, Justin, and David Dunning. “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology 77, no. 6 (1999): 1121–34.[/ref]

    So what is the Dunning-Kruger effect? It is the tendency of those with little to no knowledge of a specific domain to inflate their self-assessment of their mastery of that domain. Simply put, people who know a little of a field think they know much more than they actually do. In fact, in the study the participants’ test scores placed them in the 12th percentile, but on average their self-assessment was around the 62nd percentile. That is a rather significant over-estimation of capability. Conversely, those who performed well in the test under-estimated themselves. As Albert Einstein sagely observed, ‘The more I learn, the more I realize how much I don’t know’; the inverse is true for novices.

    From this it is relatively easy to see the application to academic fields. Students and novices will have a tendency to over-estimate their knowledge in a domain, while those who are SMEs will underestimate theirs in their presentations. This is highly common in complex fields where people may absorb a small amount at the lay level and then extrapolate their knowledge out to the entire domain, notably without accounting for the pitfalls, caveats and speed bumps along the way that the experienced person will be only too aware of. This is also likely exacerbated where people are engaging in inter-disciplinary work: being an SME in one domain does not instantly sideline the Dunning-Kruger effect in any other domain you may engage in.

    Is this just self-aggrandisement or malicious hubris? Well, not quite. As Dunning observes, this isn’t a conscious problem; it is a metacognitive issue. The Dunning-Kruger effect works at a level that is prior to any confirmation bias from cognitive dissonance or similar. As Dunning writes, the effect ‘is “pre” cognitive dissonance. It’s not that people are denying their incompetence, they literally cannot see it in the first place, and so there’s nothing to deny or experience dissonance over’.[ref]David Dunning AMA: http://www.reddit.com/r/science/comments/2m6d68[/ref] In fact, logically one cannot see the effect of the bias: to have the self-insight to recognise the ineptitude, you need the very expertise that you lack in that field.

    So how to combat it? Well, as a metacognitive bias it is hard to counter simply by knowledge of the bias itself. Of course, knowing about it may make you question your self-assessments, but it won’t help you catch the thinking in the act, as you can’t see the over-estimation errors anyway. Rather, you need to avoid making the error in the first place, and the way Dunning suggests you do that is via learning. Competence and learning in the fields you are engaged in is the way to stave off this bias. However, he also observes that you can mitigate the effect in the early stages of learning by finding those who provide you with useful assessments and getting them to act as a sounding board or cabinet.[ref]http://www.reddit.com/r/science/comments/2m6d68/science_ama_seriesim_david_dunning_a_social/cm1jnlc[/ref] I would add one more mitigation: humility. A lot of the effect is about making out that you know something when you don’t. Humility can help, by recognising that you don’t know everything in a domain, and having the ability to outwardly acknowledge this. Of course this can be hard for SMEs, as they are expected to know everything in their domain. Nevertheless, those three aspects (learning, sounding boards and humility) will help in mitigating the bias.

    By way of conclusion: David Dunning has a fascinating article from late last year in the Pacific Standard, available here: We Are All Confident Idiots (http://www.psmag.com/health-and-behavior/confident-idiots-92793); an interesting Reddit AMA here: http://www.reddit.com/r/science/comments/2m6d68; and the original paper is available here: http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf

    Tell me how you mitigate against this effect in the comments. But let me leave you with this wonderful piece of self-recognition from Dunning & Kruger’s original journal paper:

    Although we feel we have done a competent job in making a strong case for this analysis, studying it empirically, and drawing out relevant implications, our thesis leaves us with one haunting worry that we cannot vanquish. That worry is that this article may contain faulty logic, methodological errors, or poor communication. Let us assure our readers that to the extent this article is imperfect, it is not a sin we have committed knowingly.[ref]Justin Kruger and David Dunning, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology 77, no. 6 (1999): 1121–34.[/ref]

  • If All You Have is a Hammer… – Maslow’s Hammer (Confirmation Bias redux)

    If All You Have is a Hammer… – Maslow’s Hammer (Confirmation Bias redux)

    I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail. – Abraham Maslow[ref]Maslow, Abraham H. Toward a Psychology of Being. 1962.[/ref]

    In his observations of human psychology, Abraham Maslow, of the famous/infamous Maslow’s hierarchy of needs, noted the strange bias for people to use familiar methods to complete tasks, even when those methods are not ideally suited to them. Abraham Kaplan more candidly expressed it as: ‘Give a small boy a hammer, and he will find that everything he encounters needs pounding.’[ref]Abraham Kaplan (1964). The Conduct of Inquiry: Methodology for Behavioral Science. San Francisco: Chandler, 28.[/ref] Although my favourite visual expression comes from the older colloquial usage of the ‘Birmingham hammer.’ Nevertheless, whichever expression is chosen, the intent is clear: people tend to use the same tools to accomplish the job, even if they may be the wrong ones.

    This trend can be seen repeatedly throughout our society, from people and companies stubbornly sticking with outmoded methods of communication, through to DIYers using large blunt objects to persuade stuck objects to move. Maslow’s hammer appears to be all around us, and doesn’t seem to be going away. But while the physical, technical and social implementations of Maslow’s hammer are all around us, I want to think about how it gets used from the perspective of our world views.

    First, though, a brief primer on world views. Simply put, a world view is in many ways the lens through which you look at and interpret the world. Mine is thoroughly shaped by my upbringing in Australia, my parental influences, my education in the sciences (Maths, Psych, Chem, Biol, etc.), my faith, and also the minutiae of the influences from the city I live in, the politics of the era, and many more. So when we interpret information, we are inevitably interpreting it through the lens of our world view.

    So what does this have to do with Maslow’s hammer? Well, a good deal of our world view for intellectual pursuits comes from our training and education. Hence in this post I am calling it Maslow’s hammer, although there are some indications that it could be called other things; Maslow’s hammer resonates with me, likely through my Psych-influenced world view. This is where Maslow’s hammer highlights some of the strange decision making that we do in assessing arguments and evidence. It is probably best displayed by the slavish application of the scientific method by some groups to almost every other discipline, as can perhaps be seen in some of Richard Dawkins’ twitter feed: https://twitter.com/richarddawkins/status/334656775196393473 Dawkins regularly attempts to apply his hammer (scientific reductionism) to the world around him, and upon finding a bolt (philosophy) attempts to hammer it into the hole with the same ferocity as the nails he finds.

    Of course, Dawkins’ rigorous application of his worldview in the vein of Maslow’s hammer is on the extreme end of worldview application. However, I would propose that we all engage in this type of bias to varying degrees. We each bring our experience and training to bear on the subject at hand, which is perfectly reasonable. But the bias kicks into overdrive when we apply our worldview to the exclusion of all other approaches.

    If you were to highlight that this bias isn’t really a bias in its own right, you would be correct. In fact it is a different extrapolation of a cognitive bias we have already covered: the confirmation bias. However, whereas the original post in this series looked at confirmation bias as a mechanism for biased interpretation of external input, in this case the bias is applied outward. Maslow’s hammer applies confirmation bias to our internal toolkit: we tend to apply the tools in our arsenal that we are most familiar with, and correspondingly ignore tools that we may be less familiar with but that have better utility in the situation.

    So how do we engage with and steer clear of Maslow’s hammer? I believe one of the main methods is to be polyvalent scholars and thinkers. While in the renaissance period there were some scholars, such as Leonardo da Vinci, who were legitimately considered polymaths (Greek: learned in much), or subject matter experts (SMEs) in multiple disciplines, I don’t think this is common in the modern era. While there are some in our world who can be considered polymaths, becoming an SME in multiple fields is a difficult task given the high degree of specialisation required. However, polyvalence (Gk/Lt: multiple strengths)[ref]Seriously, who combines Greek and Latin word roots[/ref] is, I think, possible: being well-versed, though perhaps not at SME level, in a variety of topics aids in setting down Maslow’s hammer. The broad training helps scholars and thinkers alike to diversify the toolset used and to bring a wider variety of tools to the task, which helps with not using the wrong tool for the job. Academically speaking this is interdisciplinary work, but realistically it is all about not using a hammer where a screwdriver is ideal.

    How do you find Maslow’s hammer working in your thinking? Tell me below.

  • Is the World Going to Hell in a Hand Basket? – the Negativity Bias

    Is the World Going to Hell in a Hand Basket? – the Negativity Bias

    Brief contact with a cockroach will usually render a delicious meal inedible. [But] the inverse phenomenon—rendering a pile of cockroaches on a platter edible by contact with one’s favorite food—is unheard of.[ref]Rozin, Paul, and Edward B. Royzman. ‘Negativity Bias, Negativity Dominance, and Contagion’. Personality and Social Psychology Review 5/4 (2001): 296–320.[/ref]

    Why does our media constantly report on the negative aspects of our society, with only the occasional short snippet of positive news? Why is it that people spouting polemic and invective get so much more airtime than those building constructive arguments? While undoubtedly some of this is due to the prevalence of polemics and the problem of evil in the world, there is also a cognitive bias lurking beneath the surface: the negativity bias. This bias describes our human predisposition to pay more attention to elements of an overall negative nature than to those of a positive nature.

    The negativity bias, also known as the positive-negative asymmetry effect, has been regularly observed in a wide range of studies, and is intrinsically felt by many people. For example, the loss of a pet, or more significantly a relative, will resonate with many individuals for a significant period. Or, for a child at school, being picked last on a sports team or failing a test will have a higher and longer-lasting currency than all the times they were picked first or excelled. Psychologically, the effects of negative events stick with us longer than those of positive ones.[ref]Baumeister, Roy F.; Finkenauer, Catrin; Vohs, Kathleen D. (2001). Bad is stronger than good. Review of General Psychology 5 (4): 323–370.[/ref] Furthermore, in a range of psychological studies, even when artificially controlling for frequency and occurrence, the majority of people pay more attention to negative than positive events, and are even internally motivated to avoid negative projection rather than emphasise positive projection.

    So is this just the old adage of the world going to hell in a hand basket? Are things just getting ever worse, until eventually the situation is completely untenable and the world implodes in on itself in a sea of negativity? Well, not quite, or so the research says. Although it is obvious that negative events occur, and seemingly regularly, the overall tenor of the research is that these events are in decline. As Steven Pinker found in his research on violence in The Better Angels of Our Nature, we may actually be living in the most peaceful era ever.[ref]Pinker, Steven. The Better Angels of Our Nature: Why Violence Has Declined. 1st Edition. New York: Viking Adult, 2011.[/ref] While I’m usually quite suspicious of Whig-type historiography, it appears that the research in Pinker’s book stacks up. Why then do we feel that the world is just getting worse? A significant portion of it is likely to do with our negativity bias.

    With our inbuilt negativity bias pushing us to pay more attention to negative events than positive ones, it is understandable that media of all varieties tend to report on negative events, especially those outlets that depend on sales or clicks to pay the bills. In turn this feeds our negativity bias, and so the cycle is perpetuated. However, it is important to note that this is not a chicken-and-egg scenario: from the plethora of studies it is quite clear that the origins of the cycle lie within our cognitive biases, which are then subsequently fed and reinforced.

    The same effect occurs when people are asked to make decisions based on the evidence provided. Many will decide on a course of action, or a held belief, based on the negative arguments put forward rather than the positive ones. We humans tend to make decisions based on what we may lose, rather than what we can gain.[ref]Rozin, Paul, and Edward B. Royzman. ‘Negativity Bias, Negativity Dominance, and Contagion’. Personality and Social Psychology Review 5/4 (2001): 296–320.
    In a possible confirmation of the FUTON bias: https://sites.sas.upenn.edu/rozin/files/negbias198pspr2001pap.pdf[/ref]
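    This losses-loom-larger-than-gains asymmetry can be pictured with Kahneman and Tversky's prospect-theory value function. Their work is not the study cited above, so treat this as a related illustration only; the parameter values follow their commonly quoted 1992 estimates, and the function name is mine:

    ```python
    def subjective_value(x, alpha=0.88, lam=2.25):
        """Prospect-theory-style value function: losses are weighted
        more heavily than equivalent gains.

        alpha -- diminishing sensitivity exponent (illustrative)
        lam   -- loss-aversion coefficient (illustrative)
        """
        if x >= 0:
            return x ** alpha
        return -lam * ((-x) ** alpha)

    # A $50 loss "feels" roughly twice as large as a $50 gain.
    gain = subjective_value(50)
    loss = subjective_value(-50)
    print(round(-loss / gain, 2))  # → 2.25
    ```

    On these (illustrative) numbers, an even-odds bet to win or lose $50 has negative subjective value, which matches the post's point that we decide based on what we may lose rather than what we can gain.
    
    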

    What does this have to do with academia and the public square, other than being something that is cool to know about? Well, one of the primary applications is to do with how positions are argued. While proving the null hypothesis probably won’t suffice for formal arguments, it certainly suffices for the public square. What is more, because of the negativity bias, these negative arguments tend to be taken on board more readily than the same argument phrased positively. As an example, I was recently reading an article posted to that erudite news source: Facebook. Essentially the author of the article was arguing positively for the existence of a town from archaeological and historical evidence, and quite academically convincingly too, might I add. However, from the comments on Facebook it was evident that the take-home message for many readers was that the author was arguing negatively, against the proposed thesis that the archaeological evidence pointed to the non-existence of the town. Many readers completely failed to acknowledge the positive arguments, even when quizzed.

    So what for us? Well, in academia and public discourse there is a tendency to provide solely constructive arguments, as within the scientific method it is difficult to prove the null hypothesis (don’t get me started on Bayesian theory and NHST again). However, for the reception of that discourse we need to be aware that negatively framed arguments tend to carry more weight. Sadly, then, negative arguments must be engaged with, rather than merely dismissed. Of course, the difficulty is how to engage with them without merely being negative in response, and on that question I’m very sad to say the answer is highly contextual.

    How do you deal with negative arguments, and how are you aware of the negativity bias at play in your own work? Tell me below, in the comments.

  • If something is said often enough it must be true! – Availability Bias or the Illusory-Truth Effect

    If something is said often enough it must be true! – Availability Bias or the Illusory-Truth Effect

    There is no place like home. There is no place like home. There is no place like home. There is no place like home.

    While repeating the closing line of The Wizard of Oz may have worked well enough for Dorothy, it doesn’t work in the same way for us. Or does it? Sometimes it appears that people treat claims as true, or at least more valid, when they hear them regularly. One can easily find evidence of this when looking around on the internet. Through the ease of publication and promulgation in the modern era of social media, it is relatively easy for inaccuracies, misnomers and blatant lies to spread like wildfire. But why is it that even when they are obviously false, or roundly corrected, many people still believe them to be true? Well, it seems there is some truth to the old adage ‘if it is said often enough it becomes true.’
    Welcome to Cognitive Bias Wednesday — today looking at the Availability Bias or the Illusory-Truth Effect.

    Although hearsay, scuttlebutt and old wives’ tales may account for some of the repeated-claim evidence, it appears that the cognitive rabbit hole goes a bit deeper. In 1977 Hasher et al. ran a study looking at how the repetition of information affected its believability.[ref]Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16, 107–112.[/ref] Surprisingly, they found that not only did participants respond more confidently on seeing the repeated information, the usual symptom of a Remember-Know task, but they also rated its validity higher than that of the novel information. Their abstract highlights their findings succinctly:

    Subjects rated how certain they were that each of 60 statements was true or false. The statements were sampled from areas of knowledge including politics, sports, and the arts, and were plausible but unlikely to be specifically known by most college students. Subjects gave ratings on three successive occasions at 2-week intervals. Embedded in the list were a critical set of statements that were either repeated across the sessions or were not repeated. For both true and false statements, there was a significant increase in the validity judgments for the repeated statements and no change in the validity judgments for the nonrepeated statements. Frequency of occurrence is apparently a criterion used to establish the referential validity of plausible statements.[ref]emph. mine[/ref]

    Of greatest surprise here is that final sentence: ‘frequency of occurrence is a criterion … [for the] validity of plausible statements.’ In other words, they found that the more seemingly plausible material was repeated, the more it was believed to be factual. To translate this into the modern social media era, take the seemingly ridiculous, but vaguely plausible, claims regarding ‘chemtrails,’ fluoridated water or the ideology of ISIS. While the claims have little to no factual basis, if repeated often enough they begin to attain an air of social plausibility. The facts surrounding the matters at hand have not changed one iota, but the more a claim is shared and re-shared, the more it is believed and repeated as mantra as it appears on people’s Facebook walls and Twitter feeds.

    Indeed, the more these claims get repeated and shared, the more likely they are to cause an availability cascade.[ref]Kuran, Timur and Sunstein, Cass R., Availability Cascades and Risk Regulation. Stanford Law Review, Vol. 51, No. 4, 1999; U of Chicago, Public Law Working Paper No. 181; U of Chicago Law & Economics, Olin Working Paper No. 384. Available at SSRN: http://ssrn.com/abstract=138144[/ref] The availability cascade is effectively the result of a particular ‘factoid’ or ‘unfactoid’ going viral, and gaining significant social plausibility through the availability bias. The degree of sensationalism and clickbait present in our modern news media is just one example of this type of cascade.

    Furthermore, if you are in an academic field like I am, don’t get all high and mighty about not falling prey to the availability bias. We have our own two special instances of it: the NAA and FUTON biases. The NAA bias represents the ‘No Abstract Available’ condition, where articles have reduced citation and engagement rates if their abstract is not publicly available. The FUTON bias is the reverse: where the material is available as ‘Full Text On Net,’ i.e. open publishing or similar, the article is engaged with at a higher rate. As one Lancet study observed, this leads to ‘concentrat[ing] on research published in journals that are available as full text on the internet, and ignor[ing] relevant studies that are not available in full text.’[ref]Wentz, R. (2002). ‘Visibility of research: FUTON bias’. The Lancet 360 (9341): 1256.

    As a side note, this is one reason why many of the articles I refer to are behind pay walls. I deliberately choose non OA research, so perhaps I’m exhibiting the reverse FUTON bias.[/ref]

    What does this mean then? Well, simply put, it’s a question of signal-to-noise ratio.[ref]Yes! Finally some vague reference to my telecoms & radio background.[/ref] If articles that propose some vague theory that sounds plausible but goes against the academic evidence are left to fester and be shared around, they gain a veneer of plausibility. One such category of articles in my current field (theology) are the ‘Jesus myth’ pieces that come out every Christmas with predictable regularity, such as this one from last year: http://theconversation.com/weighing-up-the-evidence-for-the-historical-jesus-35319 To maintain an appropriate SNR there need to be appropriate responses to such articles, such as this one from John Dickson: http://www.abc.net.au/religion/articles/2014/12/24/4154120.htm Or take the claims of the various health-related articles that are regularly shared around Facebook; these too need robust counter-claims. Unfortunately, the ‘live and let live’ or ‘sweep it under the rug and let it die’ approaches only allow the viewpoints to fester, and with enough availability (shares and re-shares) they become plausible in the public sphere and consciousness.

    So in short, even though simply repeating things ad infinitum, or just yelling them louder, should not work in the public square, it unfortunately does affect opinion and plausibility. As annoying, distasteful and time-consuming as it may be, inaccurate claims need to be refuted, and refuted on a medium that gives the correction the same public availability. To simply ignore them reduces the signal-to-noise ratio and reinforces the availability bias.

    Comment below and let me know where you have seen the availability bias at work, and what other biases you would like me to look at. For those who have asked, the Dunning–Kruger effect is coming up soon.

  • Why are some people just so obstinate? – Bayes Theorem & The Backfire Effect

    Why are some people just so obstinate? – Bayes Theorem & The Backfire Effect

    What do Flat-earthers, Anti-vaxxers, 9/11-conspiracists, Jesus-mythicists and Obama citizenship deniers have in common?

    Well, subconsciously at least, they all display a contempt for Bayes’ theorem, and arguably they strongly manifest the ‘Backfire effect.’ Welcome to another instalment of Cognitive Bias Wednesday, today looking at the ‘backfire effect,’ with a prelude on Bayes’ theorem.

    Why is it that even when presented with all the data and information, some people refuse to modify their beliefs? Even with all the weight of evidence that the world is round, or that vaccines don’t cause autism, they refuse to budge, and even seem to become more set in their ways. While some of this behaviour is certainly conscious, at least some of it is the product of a cognitive bias operating behind the scenes: the ‘backfire effect.’ This is also why most arguments that occur on the internet are relatively fruitless. However, before we get to the cognitive bias, it is worth taking a brief journey through a neat cognitive feature: the theory of Bayesian inference.

    Thomas Bayes was an 18th-century Presbyterian minister, philosopher, and statistician; and while he published works in both theology and mathematics, he is best known, posthumously, for his contribution to statistics. His work on what would eventually become known as Bayes’ theorem was only published after his death, and its impact was completely unknown to him. While Bayesian modelling and statistics are applicable to a wide spectrum of fields and problems, from memory theory and the list-length effects that my previous lab worked on (MaLL),[ref]Dennis, Simon, Michael D. Lee, and Angela Kinnell. “Bayesian Analysis of Recognition Memory: The Case of the List-Length Effect.” Journal of Memory & Language 59, no. 3 (2008): 361–76.[/ref] through to Richard Carrier’s application to historiography,[ref]Carrier, Richard. Proving History: Bayes’s Theorem and the Quest for the Historical Jesus. First edition. Amherst, N.Y.: Prometheus Books, 2012.[/ref] and many more (just don’t get me started on the null hypothesis significance testing debate),[ref]Lee, Michael D., and Eric-Jan Wagenmakers. “Bayesian Statistical Inference in Psychology: Comment on Trafimow (2003).” Psychological Review 112, no. 3 (July 2005): 662–68; discussion 669–74. doi:10.1037/0033-295X.112.3.662.[/ref] the key factor for this investigation is the Bayesian logic applied to the belief-revision loop. In lay terms, Bayesian statistics can be used to predict how much someone will believe a proposition when presented with a certain set of evidence.

    Take for example the good old coin toss: suppose you have a coin with one side heads and the other tails. Logic and the laws of probability indicate that each toss should be a 50/50 chance of heads. But what happens if you flip the coin 5 times and get 5 heads in a row? Statistically speaking the next toss is still a 50-50 chance, even though the probability of any particular run of n consecutive heads is only (1/2)^n. What about 92 heads in a row,[ref]Rosencrantz and Guildenstern Are Dead: Act 1[/ref] or 122 heads,[ref]Lutece Twins, Bioshock: Infinite[/ref] do the odds change then? Probability and statistics give us a clear no: it is still 50-50 no matter how big n is. However, if you ask gamblers at a casino, or for that matter most people on the street, you will get a startlingly different response. Many will say that, since it is a 50-50 probability, the chance of the next toss being tails increases to even out the overall trend. Why? It is a faulty belief-revision loop, and this trend can be predicted by Bayes’ theorem. Applying Bayesian inference to epistemology, we can model the belief-revision loop and see that degrees of belief in the outcome of a coin toss rise and fall depending on the results, even though the statistics remain the same. Furthermore, these modifications are overwhelmingly conservative in most people, and this should give us pause for thought when we find evidence that challenges our beliefs.
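    To make the belief-revision loop concrete, here is a minimal sketch of Bayesian updating applied to the coin run. It is my own illustration, not drawn from any of the papers cited: the 1-in-1000 prior and the `posterior_biased` function are assumptions chosen for the example, contrasting a fair coin with a hypothetical double-headed one.

    ```python
    from fractions import Fraction

    def posterior_biased(prior_biased, n_heads):
        """Bayes' theorem: P(biased | n consecutive heads), where a
        'biased' (double-headed) coin always lands heads and a fair
        coin lands heads with probability 1/2."""
        likelihood_biased = Fraction(1)              # P(n heads | biased) = 1
        likelihood_fair = Fraction(1, 2) ** n_heads  # P(n heads | fair) = (1/2)^n
        numerator = likelihood_biased * prior_biased
        evidence = numerator + likelihood_fair * (1 - prior_biased)
        return numerator / evidence

    prior = Fraction(1, 1000)  # start out almost certain the coin is fair
    for n in (5, 10, 20):
        print(n, float(posterior_biased(prior, n)))
    ```

    On this toy model, five heads barely moves the belief (about 3%), ten heads makes it roughly even (about 51%), and twenty heads makes bias near-certain. Yet the probability of the next toss of a genuinely fair coin never changes: that is exactly the distinction the gambler’s fallacy muddles.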

    But what does this have to do with the backfire effect, I hear you ask? Well, the backfire effect is essentially where the Bayesian-inference model of the belief-revision loop fails, and fails badly. Normally, when people are presented with information that challenges their beliefs and presuppositions, they engage in the Bayesian belief-revision loop as above and slowly change (even if more slowly than you would expect). However, when testing how people respond to corrections of misinformation, Nyhan and Reifler found that in some cases, rather than modifying their beliefs to accommodate or fit with the information they had received, people instead clung to their beliefs more strongly than before.[ref]Nyhan, Brendan, and Jason Reifler. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32, no. 2 (March 30, 2010): 303–30. doi:10.1007/s11109-010-9112-2.[/ref] Essentially, in their tests the presence of correcting material for political misconceptions served to strengthen the misconception rather than modify it. They dubbed this the ‘backfire effect.’

    Now, this isn’t displayed by everyone in the populace, although I would argue that it works within our subconscious all the time. Some early research suggests that the backfire effect commonly rears its head when the matter is emotionally salient. So while some of the more amusing instances of the backfire effect involve people sharing satirical news stories from The Onion or The Backburner as if they were real news articles, others are less benign. Indeed, for almost every amusing instance of people failing to check their Bayesian revision loops, there is another where people are strongly reinforced in faulty beliefs on matters of real consequence. One notable recent example is vaccination, where I have seen several acquaintances hold strongly to the disproven and fraudulent research that ‘linked’ vaccines with autism. The overwhelming body of evidence finds no link between the two, and yet they strenuously hold to it.

    So what can be done about it? Well, this blog post is one useful step. Being aware of the backfire effect should help us evaluate our own belief systems when we are challenged with contradictory evidence; after all, we are just as susceptible to it as any other human being. So we should be evaluating ourselves, our arguments and our beliefs, and seeing where our Bayesian inference leads us, with the humility that comes from knowing our own cognitive biases and the fact that we might be wrong. It should also help us sympathise with those whom we think are displaying the backfire effect, and hopefully help us contextualise and relate in a way that defuses some of the barriers that trigger it.

    Please weigh in in the comments with what you thought of my explanation of Bayesian inference and the backfire effect. Also let me know what other cognitive biases you would like to see covered.

  • Why is everyone else so incompetent? Attribution Errors – Bias Wednesday

    Why is everyone else so incompetent? Attribution Errors – Bias Wednesday

    ‘Why did that person just run that red light? They obviously don’t know how to drive.’

    We hear it all the time: the tendency to attribute malice or incompetence to another individual or group, when if it were us doing the same thing it would be merely an accident: ‘I just didn’t see it.’ Welcome to the second edition of Cognitive Bias Wednesday. While there are many reasons for this tendency, a lot of them stem from a suite of cognitive biases known as attribution errors, with the Fundamental Attribution Error (FAE) at their root. Simply put, it is the tendency to emphasise internal decisions and characteristics when explaining others’ negative actions, while emphasising external factors for our own. The FAE pops up in a wide variety of situations, and we probably express it unconsciously every day; it is one of the most powerful decision-rationalisation biases.

    [Peanuts cartoon]

    One classic study of the FAE looked at drinking rates amongst adolescent males and made two observations: first, how much an individual drank, and second, whether he thought his peers drank more, the same, or less than he did.[ref]Segrist, Dan J., Kevin J. Corcoran, Mary Kay Jordan-Fleming, and Paul Rose. “Yeah, I Drink … but Not as Much as Other Guys: The Majority Fallacy among Male Adolescents.” North American Journal of Psychology 9, no. 2 (June 1, 2007): 307.[/ref] While actual drinking rates averaged similarly across the group, the rates attributed to peers were strongly inflated, as seen in the title of the study: ‘Yeah, I drink … but not as much as other guys.’ While not attributing incompetence or malice, the negative perception of drinking is externally magnified and internally denied, despite drinking rates remaining relatively steady across the cohort. We tend to attribute our own negative characteristics to external factors, and others’ negative characteristics to their internal dispositions.

    Furthermore, this is only exacerbated in a social setting. While the FAE is powerful at an individual level, it is stronger again amongst groups. The expanded bias, creatively named the Group Attribution Error, sees the attributes of an out-group as defined by individual members of that group. We met this bias briefly a couple of weeks ago in the post on Cyclists vs Motorists and intergroup biases. It is expanded further still in Pettigrew’s, again creatively named, Ultimate Attribution Error (one must wonder where to go after this). While the FAE and GAE primarily describe the ascription of negative traits to external individuals and out-groups while discarding most internal and in-group data, the Ultimate Attribution Error seeks to explain not only the demonisation of out-group negative actions, but also the dismissal of out-group positive behaviours. Interestingly, many of the studies supporting Pettigrew’s Ultimate Attribution Error use religio-cultural groups as their case studies, such as the study by Taylor and Jaggi (1974), or later studies on the FAE/UAE and suicide bombing (Atran, 2003).

    Excursus: one brief and curious aside is that, according to one study, Protestants appear to be more internally focused (lower rates of FAE/GAE) in comparison with Catholics, who are generally more externally focused (higher rates of FAE/GAE).[ref]Li, Yexin Jessica, Kathryn A. Johnson, Adam B. Cohen, Melissa J. Williams, Eric D. Knowles, and Zhansheng Chen. “Fundamental(ist) Attribution Error: Protestants Are Dispositionally Focused.” Journal of Personality and Social Psychology 102, no. 2 (February 2012): 281–90. doi:10.1037/a0026294.[/ref] The authors theorise that this is due to a greater innate emphasis on the soul within Protestantism. I will have to look further into their article, and perhaps post on it later.

    Excursuses aside, how do these attribution errors affect day-to-day research and study? One way I think they powerfully affect good academic research and debate is in the assignment of scholarly labels within academia. I sometimes have students come to me asking if I can point them towards material that is ‘more liberal’ (in the theological sense). Now, while I applaud students for wanting to seek out views alternative to their own, this level of out-group attribution of ‘liberalism’ commonly leads to a flimsy disagreement with the argument at hand. It commonly goes: ‘I disagree with this argument because it’s a liberal argument, and therefore…’ Conversely, it works in the opposite fashion: ‘I agree with this [flimsy] argument, because we are part of the same group.’ A similar bias is found in several recent articles on the religion and science interface, where the argument commonly goes: ‘Religion introduces bias, therefore no confessionally religious person can debate this topic.’ The attribution of innate bias to an out-group, in the same fashion that incompetence is attributed to an observed poor driver, is at play here.

    Being aware of our tendency to attribute negative internal characteristics to an out-group participant should help us assess things better in two ways. First, it should help us assess arguments and evidence on the grounds on which they are presented, not on the group they come from. In short: play the ball, not the person or group. Stick to the argument and evidence set forth and assess it on those grounds, whether you agree or disagree with the person or group promulgating it. Second, it should help us see blind spots within our own research and work. If we assess others consistently on these grounds, then we are more likely to be critical of our own research based on its arguments and evidence, rather than letting it float on in-group support.

    Attribution errors can be extremely hard to overcome, but knowing about them certainly helps. Hope you have enjoyed this Cognitive Bias Wednesday, as usual weigh in below on the comments!

  • Cognitive Biases: Laptops vs Paper – a useful case study on how to remember things (oh and biases)

    Cognitive Biases: Laptops vs Paper – a useful case study on how to remember things (oh and biases)

    Before the Wednesday series on cognitive biases and fallacies gets into full swing, I thought it would be good to look at a simple case study that not only shows how we fall into biases unconsciously, but also teaches us a little about how we process information. For a while now a series of articles has been floating around the web, popping up from time to time, based on the 2014 study by Mueller & Oppenheimer on memory retention with longhand vs laptop note-taking.[ref]Mueller, Pam A., and Daniel M. Oppenheimer. “The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking.” Psychological Science, April 23, 2014. doi:10.1177/0956797614524581.[/ref]

    It’s quite a salient topic, given the focus on appropriate methods of pedagogy and learning in modern society and the sudden, sharp uptake of computers over the last two decades (thanks to Gordon Moore). The majority of these articles focus on the study by Mueller and Oppenheimer, which examined memory retention from a variety of TED talks when students were asked to take notes in one of two modes: handwriting or laptop note-taking. That study found that students performed better on recognition tasks when handwriting than when typing. From this, the majority of the articles I have read simply conclude that handwriting is superior to laptop note-taking: that in the laptops vs paper debate, traditional methods come up trumps.

    But is it really? Before we get into the psychology behind learning and memory, it is worth noting a simple cognitive bias at play here: confirmation bias. Confirmation bias is the tendency to take note of the items or conclusions that fit our existing pattern of beliefs. Simply put, the authors of most of these articles subconsciously eliminated the information that disagreed with their presupposition that using laptops in a classroom is detrimental to learning. Notably, they ignored the link between laptop use and verbatim transcription, and the corresponding link between handwriting and synthesis-based, non-transcriptive note-taking.

    This is just a simple example of the problem with cognitive biases. We have a lot of them, and they are excellent at blinding us to alternative data and explanations that challenge our presuppositions. Furthermore, in many cases there is no malice behind them, which makes them harder to detect in a self-reflective manner. However, being aware of our presuppositions and our predisposition to cognitive biases helps significantly in identifying where they affect our reasoning and thinking. That is the main reason behind this Wednesday series: if we know more about some of the more common biases, it should help us defeat them internally.

    [Dilbert cartoon on confirmation bias]

    Biases aside, and back to learning theory. Thankfully, the reporter behind this article has covered the whole of the study: http://www.theatlantic.com/technology/archive/2014/05/to-remember-a-lecture-better-take-notes-by-hand/361478/ In the accompanying interview, Mueller reflects:

    “We don’t write longhand as fast as we type these days, but people who were typing just tended to transcribe large parts of lecture content verbatim,… the people who were taking notes on the laptops don’t have to be judicious in what they write down.”

    This reflection shows the cognitive mechanism underlying the study’s typing-vs-handwriting design. Indeed, the claim of better memory retention from handwritten versus typed notes comes down to the level of cognitive engagement (that is, thinking and processing) in the memory task. If you are cognitively engaged, as you are when synthesising material for a paper, then the mode of recording has little consequence (so long as you record something so you can find it again several months down the track).

    There have been studies with low- and high-cognitive-load tasks, along with low-cognitive-load distraction tasks (flipping coins, etc.), which show that it is the load of the task that affects retention. Ultimately, if you are cognitively disengaged, as when simply transcribing a lecture, then the ‘harder’ cognitive task of handwriting will generally yield better results.[ref]Cf. Piolat et al., 2012; Makany et al., 2008 for cognitive load; and Schoen, 2012 contra Mueller & Oppenheimer.[/ref]

    I generally recommend that people take notes in a ‘cognitively difficult’ fashion. What constitutes cognitively difficult varies from person to person: for some it may involve reading around the subject before and after class, while for others it may be formulating interesting questions even if they are never asked in class. For students learning in a non-native language it may actually mean typing verbatim, as the very act of thinking in a non-native language is a hard cognitive task. Indeed, some of the students I had last year did this, and supplemented their notes with photos of the whiteboard after class, as per this amusing anecdote on James McGrath’s blog. This probably wouldn’t be a useful approach for many native English speakers, but for those working across a language barrier it helped with both retention and accuracy.

    Realistically, for long-term memory retention the cognitive load should be high and the material should be reviewed regularly. I recommend high cognitive engagement, even if it is via typing, with reviews after 24 hours, then 3 days, then 7 days. Furthermore, if the task can be turned into synthesis, perhaps by answering questions or writing a short paper or synopsis on the lecture at hand, this will reinforce the cognitive loading as well. As the Mueller & Oppenheimer abstract puts it, the ultimate difference appears to be the act of ‘processing information and reframing it in their own words’ rather than the physical mechanism. So take notes well, and also take note of your cognitive biases.
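    The review spacing above can be sketched trivially. This is just my own illustration of the 24-hour/3-day/7-day intervals suggested in this post; the function name and defaults are mine, not from any study:

    ```python
    from datetime import date, timedelta

    def review_dates(lecture_day, offsets=(1, 3, 7)):
        """Suggested spaced-review dates: 24 hours, 3 days,
        and 7 days after the lecture."""
        return [lecture_day + timedelta(days=d) for d in offsets]

    # A lecture on 2 March 2015 would be reviewed on 3, 5 and 9 March.
    print(review_dates(date(2015, 3, 2)))
    ```

    The point is not the code but the habit: put the review dates in your calendar when you file the notes, so the spacing actually happens.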

    Some tools for note taking will be coming up in future Monday posts, and look forward to more cognitive biases on Wednesdays. Tell me what your preferred note taking method is in the comments.