Daniel Lemire's blog


Biology and computing are more alike than you think…

10 thoughts on “Biology and computing are more alike than you think…”

  1. Thierry Lhôte says:

    Taming complexity or not understanding it.

    Beyond a certain level, complexity cannot be absorbed by a human mind, and does the mind really care?

    If you had to gamble your life on a game of Go played by the next AlphaGo against the next Go world champion, would you risk it?

    And what is the impact on the human player’s decision if you tell him that the complexity of AlphaGo’s play vastly exceeds that of its designers? He could interpret that as less certainty or more certainty. It works both ways in our understanding of life.

    Secondly, chess will quite likely be mapped out completely one day by regressing from the ending positions back to the opening ones.
    Now, if you possessed the entire game tree of chess on a hard drive, you could not use it to understand the game of chess, its subtleties and intricacies.
    Even with a small number of pieces, say 4, there are no patterns discernible to a human mind that would let it reproduce the win in competition, because the solution is too vast and too concrete.
    So imagine with 16 pieces on the board…
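
    This “regression from the ending positions” is what chess programmers call retrograde analysis, the technique behind endgame tablebases. Here is a minimal Python sketch of the idea, using a toy subtraction game in place of chess; the names are illustrative, nothing here comes from the thread itself.

    # Retrograde analysis in miniature: label the terminal position, then
    # work backward. The game is a toy: players alternately take 1 or 2
    # stones, and whoever cannot move loses. Real tablebases apply the
    # same backward induction to billions of chess positions.
    def solve_by_retrograde(n_stones):
        # outcome[k] is True if the player to move wins with k stones left
        outcome = [False] * (n_stones + 1)  # 0 stones: no move, a loss
        for k in range(1, n_stones + 1):
            # a position is winning iff some move reaches a losing position
            outcome[k] = any(not outcome[k - take] for take in (1, 2) if take <= k)
        return outcome

    print(solve_by_retrograde(10))  # stone counts at multiples of 3 are losses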

    You can compare it, in a way, with the amount of information we get in our daily lives, which does not help us but instead drowns us in inefficiency.

    Using analysed data that we have no real grasp of is often completely useless for making a decision.

    Thirdly, even with small, sensible tools like Excel and PowerPoint, we are not sure that human organisations work better than they did with the old tools of the managers in the 50s: paper and pencil. Since the end of WWII, the global ROI of the world’s main firms has been inexorably dropping, and no one can reverse the trend.

    Lots of young chess players have weak minds and ideas because they rely too much on the computer during their training. Every grandmaster knows that the bulk of training analysis should be done at home on one’s own, and only then checked on the computer. Otherwise you do not build the strength necessary to be competitive.

    So, OK, AlphaGo is cool; it could certainly relieve human minds of a vast number of mindless tasks, but will those minds grow better?

    I mean, just how strong can we be in our lives and in our strategic decisions if we have never experienced an ounce of suffering or hard work?

  2. Marcel Popescu says:

    I agree with your analogy between biology and computing (I’ve been calling animals “robots” for the last few years, and I’m in awe of the complexity of their software).

    A few corrections, though:

    – junk DNA is not “unnecessary genes”; it’s DNA that’s outside of genes (only about 1.5% of the DNA is in genes – that is, code that ultimately generates proteins)

    – just as was the case with “vestigial organs”, of which there were about a hundred and all of which we now know to be useful, it appears that the “junk DNA” is also anything but; see http://www.wondersandmarvels.com/2016/03/our-genetic-dark-matter.html

    1. junk DNA is not “unnecessary genes”

      Point taken. Thanks.

      “junk DNA” is also anything but (…)

    The larger point is this… how much information is actually coded in our DNA? If some of the junk DNA has an effect… that is fine.

    The point is that, in terms of bits of information, you do not even need someone’s whole genome to describe their genetic background. And even the whole genome fits on a USB key (see the back-of-the-envelope sketch below).

      So biology is not so far above computing…
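
    For scale, here is a quick back-of-the-envelope calculation in Python; the constants are commonly cited approximations, not figures from this thread.

    # Rough information content of a human genome.
    BASE_PAIRS = 3.2e9       # approximate human genome length
    BITS_PER_BASE = 2        # four letters (A, C, G, T) -> 2 bits each
    whole_genome_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
    print(f"whole genome, uncompressed: ~{whole_genome_mb:.0f} MB")  # ~800 MB

    # An individual differs from a reference genome at only a few million
    # positions, so storing just the differences is far smaller still.
    VARIANTS = 4e6           # typical single-nucleotide variants per person
    BYTES_PER_VARIANT = 5    # ~4 bytes for the position, 1 for the letter
    diff_mb = VARIANTS * BYTES_PER_VARIANT / 1e6
    print(f"differences from a reference: ~{diff_mb:.0f} MB")  # ~20 MB

    Either way, comfortably within a USB key.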

    2. geek42 says:

      I would call that junk DNA deprecated library functions, just like the ones pulled in by much dynamically linked software 😀

  3. Peter Turney says:
    1. I did not know about this book!

  4. trylks says:

    “At some point in this century, the difference between biology and computing will fade almost entirely.”

    That will happen when we can make backups of people. Imagine a surgeon using “Ctrl + Z”. It would be awesome if we could get that in this century. Pursuing the low-hanging fruit and planning only for the next 4 or 5 years (in the best cases) will more likely lead to economic collapse, IMHO.

    And then we have the gap between research and everyday use. Self-driving cars are already “real”, but when will sales of self-driving cars surpass those of traditional ones? AFAIK, that might never happen.

    1. And then we have the gap between research and everyday use. Self-driving cars are already “real”, but when will sales of self-driving cars surpass those of traditional ones? AFAIK, that might never happen.

      I am sure many people thought 20 years ago that the US would never have a black president.

  5. Ben says:

    “At some point in this century, the difference between biology and computing will fade almost entirely.”

    I’m really curious what gives you confidence in this statement? A couple examples to give context to my question:

    According to Wikipedia the spherical shape of Earth was first conclusively established in the 3rd century BC. Surely at that time there were people who at least strongly speculated about circumnavigating the globe. Yet it wasn’t until the 16th century that it happened. If you asked during antiquity how long it would take for that feat to be accomplished, do you think anyone could have reasonably guessed nearly 2,000 years?

    A more modern example. I wonder how many years you would have to go back before you would find a majority of technologically literate people predicting that flying cars would be a practical reality before self-driving cars. I bet not that many.

    My general point is the banal one that accurately predicting technological developments is hard. I believe that something like the bio/computer convergence you mention is likely to happen at some point, but I wouldn’t want to bet real money on whether it’s going to be closer to 50 years or 1,000.

    Any concrete reason for your timeline?

    1. My general point is the banal one that accurately predicting technological developments is hard.

      If you read this blog, you’ll know that I make no claim to be an exception in this regard. The “predictions” I make (I have a whole page of them: http://lemire.me/blog/predictions/) are not meant to actually “predict” the future. My brain has no special ability.

      The purpose is to actually make us think.

      I expect most of my predictions are not going to come true. So what? The process of making them is what matters.

      When I read other people’s predictions, I learn things. I think “wow… could this really happen… let me see…”