Daniel Lemire's blog

Would an artificial intelligence “grow old”?

6 thoughts on “Would an artificial intelligence “grow old”?”

  1. Ben Babcock says:

    I wonder how much of the original Linux or Apache codebase has survived to the present day. Perhaps these projects are the exception because of the amount of changes. So we don’t have a linear relationship but a quadratic one: software that receives almost no modifications or a great many is resilient, while software that receives only occasional, piecemeal updates is susceptible.

    So an AI would keep growing, keep learning, but it would become a Ship of Theseus, where eventually all of its original code has been replaced.

  2. @Ben

    “I wonder how much of the original Linux or Apache codebase has survived to the present day. Perhaps these projects are the exception because of the amount of changes.”

    I do not know how much of the original Linux kernel source code remains today. Probably very little. (We could easily quantify this since we do have the source code; a rough sketch of one way to measure it appears at the end of this comment.)

    Is it an exception? I think not. Most code that I have constantly updated for many years has been rewritten iteratively many times.

    “So an AI would keep growing, keep learning, but it would become a Ship of Theseus, where eventually all of its original code has been replaced.”

    Most cells in your body will die and be replaced in the next few months. Your connectome is constantly changing. You are probably very different, as far as your brain is concerned, from when you were 1 year old.

    But change itself is not aging. There are organisms that are effectively immortal, yet their cells are being replaced all the time.

    We would definitely expect an AI to evolve deeply. In fact, we should expect an AI to be able to evolve at an accelerated rate.
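
    To make the “we could quantify it” remark concrete, here is a rough sketch, in C, of how one might measure what fraction of the lines of an early kernel source file still appear verbatim in a modern counterpart. The file names are supplied on the command line and are purely illustrative, and a verbatim line-by-line comparison is only a crude proxy; tooling built on the version-control history would do a better job.

    ```c
    /* survival.c — crude sketch: what fraction of the lines of an old source
     * file still appear verbatim in a newer version of that file?
     * The two file paths are given on the command line; they are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_LINES 1000000
    #define MAX_LEN   4096

    /* Read a file into an array of heap-allocated lines; return the line count. */
    static size_t read_lines(const char *path, char **lines) {
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); exit(EXIT_FAILURE); }
        char buf[MAX_LEN];
        size_t n = 0;
        while (n < MAX_LINES && fgets(buf, sizeof(buf), f)) {
            buf[strcspn(buf, "\n")] = '\0';  /* strip the trailing newline */
            lines[n++] = strdup(buf);
        }
        fclose(f);
        return n;
    }

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s old_file new_file\n", argv[0]);
            return EXIT_FAILURE;
        }
        char **old_lines = malloc(MAX_LINES * sizeof(char *));
        char **new_lines = malloc(MAX_LINES * sizeof(char *));
        size_t old_n = read_lines(argv[1], old_lines);
        size_t new_n = read_lines(argv[2], new_lines);
        /* Count old lines that still occur verbatim somewhere in the new file.
         * Quadratic scan: fine for a sketch, too slow for a whole kernel tree. */
        size_t survived = 0;
        for (size_t i = 0; i < old_n; i++) {
            for (size_t j = 0; j < new_n; j++) {
                if (strcmp(old_lines[i], new_lines[j]) == 0) { survived++; break; }
            }
        }
        printf("%zu of %zu original lines (%.1f%%) survive verbatim\n",
               survived, old_n, old_n ? 100.0 * survived / old_n : 0.0);
        return EXIT_SUCCESS;
    }
    ```

    Running it on an early kernel source file and its modern counterpart would give a first approximation; anything more serious would need to track lines across renames and reformattings.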

  3. Andre Vellino says:

    I know for a fact that Hanson’s law of computing is true in at least one software-intensive environment: good old fashioned centralized telephone switches. The bigger the Nortel switching software grew, the more brittle it became.

    I think Hanson’s law is true for *some* kinds of software systems – those that are, as you say Daniel, not modular or flexible – and that this brittleness is encouraged by some kinds of software development methodologies (e.g. “cathedrals”).

    In Nortel switching software, everything depended on everything else (not quite an accurate characterization, but close). They saw this happening and did a lot of refactoring but (I’m quoting a director) “it was like changing the wheels on a tractor-trailer while it was in motion”.

    In addition there was a culture of code at Nortel that encouraged a lot of cloning: “I’m being told to write ‘feature X’ so I’ll just copy ‘feature Y’ written by my buddy in the next cubicle and modify the bits I understand to do what I need.”

    Hence the volume of code would grow very fast, compilation would be slow, hidden bugs would propagate, and so on. Quite a bit of effort was spent at Nortel writing meta-code that would analyse how the cloning was happening.

    Really smart software systems that are able to build new ones are not going to encounter that problem – or at least, they will solve it.

  4. Angelo says:

    Your interesting post prompted me to write this one, which is sort of related.

    http://c0de517e.blogspot.ca/2015/07/the-following-provides-no-answers-just.html

  5. Alex says:

    “Torvalds wrote the original Linux kernel as a tool to run Unix on 386 PCs… Modern-day Linux is thousands of times more flexible.”

    It’s more flexible in some ways, but less in others.

    For example, clearly Linux runs on thousands of off-the-shelf computers today, which it did not originally. Then again, have you ever tried to write a new module for Linux? In the 0.0.x days, it was super easy to extend Linux. Today, it’s huge and complex and extremely daunting to get started. A modern Linux module needs dozens of things to be perfect before it will even load.

    Or let’s say you want to change some interface. In the 0.0.x days, this was easy: you just change it. Today you’ve got to deal with hundreds of device drivers, millions (billions? probably) of installed copies of the kernel that can’t or won’t be upgraded, thousands of programmers who know and expect the old way.

    Linux today has a lot more mass than in the 386 days. That mass can support an incredible array of devices, and work around real-world problems with those devices, and even make it run faster than before. But it’s still mass. That makes it more complex, and slower to change course. The market has declared that “runs on every PC” is more valuable than “is easy to understand and hack on”, and that’s fine, but I wouldn’t go so far as to say it’s “thousands of times more flexible”. To me, it’s less flexible.

    1. “For example, clearly Linux runs on thousands of off-the-shelf computers today, which it did not originally.”

      Linux today runs on everything from routers and televisions to mobile phones (Android) and game consoles (Steam), all the way up to supercomputers. When Linus started out, Linux was good for one thing: a fun weekend project. Today it is a massively powerful tool used for purposes Linus could not even have imagined. I stand by my statement: Linux is orders of magnitude more flexible.

      “Then again, have you ever tried to write a new module for Linux?”

      Yes. And I have had students with relatively little experience do it. If you know C, it is a simple matter (a minimal module is sketched at the end of this reply). Moreover, you can do it today on platforms where Linux could not run years ago. And, come on, a kid can compile a custom Linux kernel with ease today. It is also better documented than it ever was. But let us concede the point that, in general, programming today is more daunting than it was 20 years ago. Compilers have more options. We have more libraries. Libraries have gotten larger. Kernels have gotten much larger. Hardware is far more sophisticated. We have more tools. But programmers today can achieve so much more… You simply could not do a lot with a computer 20 years ago. They were simpler… but also far more limited.

      Let me work by analogy. Is a man living in a hut in 1000BC more flexible than a man living in Los Angeles today? No. The man from 1000BC had a simple life, but his options were drastically limited. Today, a man in Los Angeles can do so many things…

      “Or let’s say you want to change some interface.”

      It is a choice, right? You can be like Apple and just do it. Or you can be cautious and preserve backward compatibility all the way to the beginning of the universe.

      You can choose to age your software by restricting it so that it can only do whatever it did in the past. Or you can expand it as needed. The software industry tends to favor the latter.

      “That makes it more complex, and slower to change course.”

      Intuitively, one might think so, but is that actually the case? One might think that operating systems today are stuck and can’t evolve. But we have lots of contrary evidence. They are changing fast.

      “is easy to understand and hack on”

      You can hack Linux to do things that were improbable 20 years ago… I do lots of hacking for fun… See http://lemire.me/blog/2016/04/02/setting-up-a-robust-minecraft-server-on-a-raspberry-pi/
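
      Coming back to “have you ever tried to write a new module for Linux?”: to give a sense of what “a simple matter” means in practice, here is a minimal hello-world module sketch. The module name and log messages are purely illustrative; building it assumes a kernel with module support, the kernel headers installed, and a one-line kbuild Makefile (obj-m += hello.o).

      ```c
      /* hello.c — minimal Linux kernel module sketch (illustrative). */
      #include <linux/init.h>
      #include <linux/module.h>
      #include <linux/kernel.h>

      MODULE_LICENSE("GPL");
      MODULE_DESCRIPTION("Minimal hello-world module");

      static int __init hello_init(void)
      {
              pr_info("hello: module loaded\n");
              return 0; /* 0 signals successful initialization */
      }

      static void __exit hello_exit(void)
      {
              pr_info("hello: module unloaded\n");
      }

      module_init(hello_init);
      module_exit(hello_exit);
      ```

      Built against the running kernel with make -C /lib/modules/$(uname -r)/build M=$(pwd) modules, it can be loaded with insmod hello.ko and removed with rmmod hello; the messages show up in dmesg.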