Daniel Lemire's blog


Science and Technology links (March 2nd, 2018)

10 thoughts on “Science and Technology links (March 2nd, 2018)”

  1. Guy Tremblay says:

    Concerning “Psychology as a field is in big trouble.”

    Have you read “The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice” by Chris Chambers? Very interesting… and not only for psychologists!

    1. I haven’t read this manifesto. I’ll hunt it down.

  2. While it is undeniably true that people who are obese have higher risks of disease and, on average, shorter life-spans, has this been separated from lifestyle factors such as exercise or poor eating habits (processed vs. whole food, etc.)? I suspect that the risks are different for the obese poor than for the obese rich as well.

    There are also the increased dangers associated with yo-yo dieting (that have been documented since 1988 if not earlier). Are the subjects in question subjecting themselves to this factor and thus placing themselves at increased risk?

    Anywhere in science that people draw a line between an observation and an outcome, I become suspicious.

  3. Béatrice says:

    Hi Daniel,
    What do you mean precisely by saying that psychology as a field is in big trouble? What’s the problem with the article you are quoting?

    1. Highly cited and widely reported results in psychology do not hold up under reproduction.

  4. Béatrice says:

    Psychology is composed of several disciplinary sub-fields, among which is social psychology, the one concerned by the article that you quote. In each field there are competing theories, giving rise to experimental research aimed at empirically validating the theory in question. The “priming” method used in the cited article is widely used in cognitive psychology, social psychology, psycholinguistics, cultural psychology, etc.

    In the field of social psychology, there is a significant body of findings on social priming, suggesting that social knowledge (e.g., stereotypes) is spontaneously and immediately activated in memory. This automatically activated knowledge forms and influences people’s impressions, judgments, feelings… (see Ferguson and Bargh, 2004).

    The research of Dijksterhuis and van Knippenberg (1998), which failed to be replicated in the large-scale replication project that you report, tested the bold assumption that automatic and unconscious knowledge activation can influence complex reasoning processes (hence higher psychological processes). Their results would have added a “brick” to the broader theory according to which complex behavior can be automatically induced or driven by knowledge incidentally activated during perception. Simply stated, these researchers hypothesized that the priming effect (activation of complex social constructs) might extend beyond judgment to behavior.

    But, contrary to what you seem to suggest (that this is a widely validated result), even from this theoretical approach, their hypothesis was “daring” and conflicted with other theories of complex cognitive processes. That is why the results of Dijksterhuis and van Knippenberg (1998) were surprising and quickly “tested” by other researchers, who either succeeded in reproducing them (e.g., Bry et al., 2008), refined the understanding of the underlying mechanisms (e.g., LeBoeuf and Estes, 2004), or failed to replicate them (e.g., Shanks et al., 2013). Indeed, even within the framework of their underlying theory, it is accepted that the mechanisms behind social priming phenomena are multiple and that experiments must better disentangle the different factors so that the results can enrich, refine, and even reconstruct the underlying theory.

    These considerations lead me to stress the difference between studies that aim at “direct” replication of an empirical result and studies aimed at the replication of a mechanism.

    Here too, there are different schools of thought about methodologies and replication goals. This issue has been widely discussed in leading journals in psychology, and various replication initiatives have been put in place (an excellent presentation is that of Stroebe and Strack (2014), which also takes the experiment of Dijksterhuis and van Knippenberg (1998) as an example).

    The study you are quoting is part of such an initiative, and it testifies, in my opinion, not to the weakness of the scientific approach in psychology, but to its strength, since it proves that psychology can self-correct by putting in place proper mechanisms to improve.

    I have not been able to access the full text of this study, but according to the abstract, the reported studies aimed at direct replication and invalidated the results of Dijksterhuis and van Knippenberg (1998). So, their results allow social priming research to advance and call for a greater focus on the mechanisms that underlie the apparent independence of conscious intention from actual behavior, as argued by Ferguson and Bargh (2004) and Wheeler and DeMarree (2009). Moreover, it is a significant contribution because it lends support to the proponents of theories that value the intentional and conscious control of complex behavior. To conclude, the results of this large-scale replication are “problematic” for one theory, but not for another (although the question of direct replication vs. conceptual replication would undoubtedly be raised again). Until then, nothing could be more normal in Popperian science 😉

    1. (…) it proves that psychology can self-correct by putting in place proper mechanisms to improve.

      The study I quote is pre-registered and multi-lab. This means that before you run the experiments, you publish your methodology, your hypothesis, and your statistical tests. Then several independent laboratories run the experiment, following the registered methodology, and the results are published.

      If this were a common practice in psychology, then I would agree with you that one could be hopeful about the field.

      Setting aside the multi-lab part, how common is pre-registration in psychology? Can you point me to a database of pre-registered experiments?

  5. Béatrice says:

    There is an interesting “research digest” that I quote:

    A full list of the findings that the researchers attempted to replicate can be found on the Reproducibility Project website (as can all the data and replication analyses). This may sound like a disappointing day for psychology, but in fact really the opposite is true. Through the Reproducibility Project, psychology and psychologists are blazing a trail, helping shed light on a problem that afflicts all of science, not just psychology. The Project, which was backed by the Association for Psychological Science (publisher of the journal Psychological Science), is a model of constructive collaboration showing how original authors and the authors of replication attempts can work together to further their field. In fact, some investigators on the Project were in the position of being both an original author and a replication researcher. “The present results suggest there is room to improve reproducibility in psychology,” the authors of the Reproducibility Project concluded. But they added: “Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should” – that is, being constantly sceptical of its own explanatory claims and striving for improvement.

    1. Béatrice says:

      See also:
      Registered Replication Reports
      Multi-lab, high-quality replications of important experiments in psychological science along with comments by the authors of the original studies.
      https://www.psychologicalscience.org/publications/replication

      1. This may sound like a disappointing day for psychology, but in fact really the opposite is true.

        There is a Faustian reading of this sentence.

        Consider an analogy…

        “It may seem like a disappointing day for politics when politicians keep being caught in corruption charges, but the opposite is true.”

        I understand where they are coming from. The fact remains that if you pick a highly cited psychology article at random, even if the journal article ended up getting lots of publicity and a TED talk, then chances are good that the work is not reproducible. That is, it is wrong.

        You may choose to feel good about the fact that, in a handful of cases, two decades later, people will try to reproduce the work and be able to publish a failure-to-replicate study. That it makes people feel good is an indication of how low they set the bar.

        As in my corrupt-politician analogy, you can choose to feel good about the fact that some of the politicians get caught, even if it takes decades. That’s certainly better than some alternatives. But you still don’t have honesty!

        There are thousands upon thousands of new psychology studies published each year. What fraction of them will be tested during a reproduction study? Probably much less than 1%.

        What are these researchers doing? The obvious priority is to focus on reproduction. That is not happening. Why not?

        Multi-lab, high-quality replications of important experiments in psychological science

        The link https://www.psychologicalscience.org/publications/replication/ongoing-projects is interesting.

        You have, at best, 4 ongoing replication studies right now. I say at best because I cannot find complete documentation for any of these registered studies.

        Then you have 5 published replication reports. Five.

        If this site is anything close to a good match for reality, it is deeply depressing.