Daniel Lemire's blog


Can Science be wrong? You bet!

16 thoughts on “Can Science be wrong? You bet!”

  1. I can’t help but take the bait, Daniel.

    “A large fraction of AI researchers have convinced themselves that intelligence must emerge from Prolog-like reasoning engines.”

    Well, maybe that was true for the ten years or so around the heyday of Lisp Machines (1980–1990). But today, Good Old Fashioned AI is all but dead. Everything is probabilistic models, stochastic algorithms, etc. No one (lamentably) seems to work in the area of “logics for AI” – which, it seems to me, still deserves study (viz. non-monotonic logics, default reasoning, etc.) – not necessarily because that is the best way to build intelligent machines, but because logic, complexity theory, automated theorem proving, etc. are all valid (scientific) disciplines of research for which criteria of advance, progress, explanatory power, predictive power, etc. are quite as applicable as in any other field.

  2. Quoted for truth.

    If you are willing to look beside mainstream, you may find some interesting alternative lines of thought. Here are some examples:

    * Economists: Take a look at physics, chaos theory and fractals. Fractal theory won’t allow you to predict when some big change is going to happen. It gives you a framework that helps to explain what has happened, however. See “The Black Swan” by Taleb for reference.

    * AI: I guess the interesting research paths lie within self-organizing and agent based systems. It’s going to be interesting to see if biotech and AI are going to converge somehow.

    * Software engineering: Yeah. I agree that just “waterfall” is a bit troublesome. I tend to see various processes as a spectrum ranging from waterfall (controlled, prescriptive) to agile (compromise?) and lean (demand driven).

    * Physics: I guess it would be nice to have a “theory of everything”. I suppose that’s kind of unattainable considering resources are limited (it’s impossible to build big enough experiments).

  3. You seem to mix two types of problems in the discussion of fraud.

    What is commonly perceived as fraud is the “false positive”: a scientific claim that gets published containing false data and sloppy analysis, and, in the end, incorrect conclusions.

    We also have the “false negatives”, the failure to identify the false scientific claims. The whole discussion about the unwillingness to challenge the dominant scientific paradigm falls in this category. We see some results and do not perform research to prove them wrong.

    I believe that incorrect results (including the fraudulent ones) will always be present. However, if such results become prominent and widely used, they will be uncovered. Hwang’s cloning fraud was discovered for exactly this reason, and the same goes for Schön and his molecular transistors. People could not verify the highly visible results that had made them famous. So depending on fraud to reach high status is a highly risky proposition, simply because of the rivalry of other scientists to be the “correct” ones. The higher you climb the research ladder, the more people will check and double-check your results. Exposing an error in the work of a famous scientist brings fame to the person who uncovered the problem; whether the problem was due to fraud, incorrect assumptions, or sloppiness does not matter. So self-correction does happen for the important developments.

    The fraudulent results that will not get uncovered are the ones that do not have any significant impact, and (often) do not challenge the established paradigm. So, yes, in a sense it is possible to have a perfectly fine career in science producing results that do not challenge the status quo, and at the end are not worth reading or replicating. The fact that whole research communities do not want to challenge their unimportant results is not so important, imho. The fact that nobody uses whatever they generate is the most severe punishment. For the financial models, you see the self-correction, as everyone now works to identify the problems with the prior models.

    It may take some time to reach the point of self-correction. It took more than two hundred years for Newtonian physics to be proven wrong. But it happens once the scientific results are just not “useful enough” anymore.

  4. Mike Stiber says:

    Does anyone really think that economics, AI, or software engineering are sciences? There may be elements of those areas that are actually science, but those are very small parts.

  5. @Stiber

    Does anyone really think that economics, AI, or software engineering are sciences?

    Isn’t research in these fields funded by the National Science Foundation in the USA?

  6. Mike Stiber says:

    Ah, like the definition of science fiction being what science fiction editors buy.

    But, for the most part, those fields don’t utilize the scientific method. And NSF funds all sorts of engineering, which most would agree isn’t science.

  7. @Daniel. This article in Wired is about the peer-review bias towards “surprising” results, which generates a lot of unpublished “normal science” data. Germane to this discussion:


  8. Marcel says:

    …a nice ‘paper’ about social disasters in cosmology; another aspect of how it should not work!

  9. Paul says:

    There’s an economics saying I’ve always been fond of: “The market can stay irrational longer than you can stay solvent.” I believe that good science ultimately wins out over the bad stuff, but not always in a couple of lifetimes. Maybe String Theory will find its place next to Relativity, or maybe it’ll join Lamarckism and Phlogiston Theory, but sooner or later we’ll find evidence or a better theory. There’s plenty of bright minds thinking hard on the fundamental questions of physics.

    From a more practical standpoint, if we do want to speed the process up, perhaps a professional track needs to be set up specifically around confirmation of other results. We currently value original research highest: maybe a cadre of researchers evaluated solely on the work they do confirming other results, evaluating structural biases, etc. would balance the work done. I agree with earlier commenters that important results do get reproduced, but we could certainly extend that treatment down to the marginal results as well.

  10. Marcel says:

    @paul: At the moment there is not even a glimmer of hope that string ‘theory’ can find any place…because it is not testable at all…

  11. @Paul

    I’ll agree that String Theory has gotten more attention than is warranted, but it strikes me as premature to rule it out completely.

    I’m told that it is difficult to get a job in theoretical physics, in 2010, if your work is not aligned with string theory.

    The real problem is that people proposing alternatives are not being hired. They go work for financial firms on Wall Street instead.

    On what is known with great certainty, there should be little divergence between scientists. On what is pure speculation, there should be as much diversity as possible.

  12. @Paul

    “As you move towards the highly engineered you lose outside voices, but have a much closer link to objective truths.”

    I don’t think this needs to be true. Anyone with the right training can do research in Computer Science, and you can mostly test your work. You think algorithm A is faster than algorithm B? Just implement both of them and compare. (This is naive, but I just want to illustrate my point.)

    There are many fields that were initially closed to outsiders which are becoming very accessible. We don’t need to oppose verifiable truths and accessibility.
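    The “implement both and compare” approach above can be sketched as follows. This is a minimal illustration, assuming Python; the two algorithms (a hand-written insertion sort versus the built-in sort) and the input size are my own choices, not anything from the discussion:

    ```python
    import random
    import timeit

    def insertion_sort(a):
        """O(n^2) insertion sort, operating on a copy of the input."""
        a = list(a)
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def builtin_sort(a):
        """Python's built-in Timsort, O(n log n)."""
        return sorted(a)

    # Same input for both algorithms, so the comparison is fair.
    data = [random.random() for _ in range(2000)]

    # Time each algorithm; repeating the call reduces measurement noise.
    t_insertion = timeit.timeit(lambda: insertion_sort(data), number=5)
    t_builtin = timeit.timeit(lambda: builtin_sort(data), number=5)

    print(f"insertion sort: {t_insertion:.4f}s, built-in sort: {t_builtin:.4f}s")
    ```

    As the commenter concedes, this is naive: a serious comparison would also vary the input size and distribution, control for warm-up and caching effects, and report variance across runs. But the point stands that the claim is directly testable.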

  13. Paul says:

    If matter is vibrations on strings, there should be other vibrational modes corresponding to higher energies: heavier copies of the known particles should appear in extremely high energy situations. I understand these would be many orders of magnitude higher in energy than we can currently produce with particle accelerators and thus are, from a practical standpoint, untestable. But that’s an issue with our technology, not the theory itself: general relativity was true before we had the ability to make fine observations in distant gravitational fields.

    Additionally, even if it failed to predict something new, if we could derive all the laws of chemistry and physics from a particular string theory variant, that would be a very strong argument for it even if it wasn’t classically tested.

    I’ll agree that String Theory has gotten more attention than is warranted, but it strikes me as premature to rule it out completely. If the universe works in a way that isn’t particularly amenable to testing that’s its prerogative.

  14. Paul says:

    “more attention than is warranted” was perhaps an understatement: Pushing out valid dissenting views is an egregious attack on scientific progress.

    Actually, this leads me to believe science has a nice tendency to avoid the worst cases of field hijacking. Material Science requires big, expensive machinery, so only professional researchers can contribute. But it’s involved in producing real, physical, measurable quantities, so it’s hard to pull the field too far down the wrong path: either you’re physically producing the desired materials or you’re not.

    Contrast with theoretical physics: as string theory demonstrates, it’s hard to evaluate these things. What’s dark energy? Are there particles that make up quarks? Ideas are overturned over very long time spans, so the field is more susceptible to these sorts of group-think and hijacking. But to balance that, anybody can think about these problems. Einstein discovered special relativity as a patent clerk. As you move towards the esoteric, you get more contributions from outsiders. As you move towards the highly engineered you lose outside voices, but have a much closer link to objective truths.

  15. Paul says:

    Computer Science is a weird field, I’m not surprised it doesn’t fall into that continuum. It isn’t cleanly Math, Engineering or Science, but some combination thereof.

    And I agree that fields tend towards openness: a microscope or a telescope are very affordable but were once exceedingly rare.

    I wasn’t promoting the idea that we should strive to push fields to one extreme or the other, only that if you line the two up on a pair of axes, you’d see very few fields in the inaccessible, difficult-to-verify section. Maybe some of the social sciences, where experiments involve mass interviews or other experiments on large groups?

  16. Devil's advocate says:

    Hé hé, worse than wrong: science can be meaningless.