Daniel Lemire's blog


The future of innovation is in software

12 thoughts on “The future of innovation is in software”

  1. Zhenyu Ye says:

    It is not proper to compare the hardware designer with the house builder. We should compare them with the people who build great skyscrapers, cross-sea bridges, spacecraft… Hardware designers use the most cutting-edge materials, techniques, and design philosophies to build the foundation for the evolution of software.

  2. Adam bossy says:

    I tend to agree, but I wonder if it’s because of my academic CS background and hence a biased perspective. However, I feel software will be necessary to drive those innovations. The capacity of the human mind has inherent limitations that can be augmented by tasks that only computers can perform.

  3. The capacity of the human mind has inherent limitations that can be augmented by tasks that only computers can perform.

    Exactly. I don’t care to make smarter-than-human machines, but I want to become smarter than any other human being in history. 😉

  4. hushedser says:

    I think you’re right about the future. I have taken 3 online university classes so far and I have found them to be fantastic. It truly is amazing what you can learn without leaving your home.

  5. I would love to see (or work on!) solutions for much better tele-conferencing for the purposes of academic conferences. Maybe something like Second Life would work, but it has to be more responsive.

    This kind of “tele-world” seems more likely than the pie-in-the-sky idealism attributed to the 1950s. I’m not going to hold my breath, however.

  6. Sylvie Noel says:

    It will probably be a while before tele-work really takes off, if ever. Human beings crave social contact and a large part of work has to do with social contact. And no matter how good you make a tele-conferencing system, it just won’t support the impromptu meetings you can have with your colleagues when you meet them in the corridor at a conference.

    Maybe we should be building a conference-oriented virtual world that supports this kind of casual meeting in addition to official presentations. But then, I’d never get to go to Europe 🙂

  7. @hushedser Even if you attend classes on campus, how much of the “learning” do you do during the classes, and how much do you do back at home or in the local café?

    @Geoff In Canada, during the last 20 years, between 15% and 20% of all “employees” moved home and became self-employed. In some fields, like translation, the majority of the workforce is made up of people working from their basements. I know many, many self-employed web developers who work from cafés and their homes.

    @Sylvie I have four answers:

    1) In the USA alone, there are over 30 million telecommuters. That’s a country larger than Canada. (See http://www.itfacts.biz/100-mln-americans-to-telecommute-in-2008/5439 ) There is a sharp and sustained growth of this number for as far back as we can see.

    2) Claiming that tele-work is less efficient is easy. Proving it is difficult. In fact, study after study shows that telecommuters are as productive as office-bound employees, and generally happier. The people who are unhappy are those who stay behind. They have the impression of “taking up the slack”. Unfortunately, these unhappy people often include your boss…

    3) Claiming that long-distance travel is required for scientific collaboration is quite certainly wrong. I have worked time and time again with people I have never met, or hardly ever met. As for information broadcasting… people who attend conferences are not better informed than those who only use online tools (blogs, twitter, online PDFs, email…).

    4) Betting against technology is dangerous. We have an ever increasing bandwidth. We have better tools every year. Meanwhile, live conferences are not getting better. In fact, when I go to conferences, I see more and more people hiding in a corner checking emails, or playing with their cell phones. I am not saying live conferences will go away, but they will be replaced, mostly (80%), by cheaper and more efficient means.

  8. Sylvie Noel says:

    Ah, but I never said it wasn’t efficient! What I said is that an unfortunate aspect of teleworking is the lack of face-to-face contact with other people. We are social animals and we crave interactions with others.

    Also, as a CSCW person, I am not betting against the technology. But I am saying that if the technology we build does not take into account people’s needs for impromptu, unscheduled, relaxed interactions with others, then we are failing at supporting one of humanity’s basic needs.

    I think this is the reason applications like Twitter and Facebook are so popular, because they let people interact in a relaxed way.

  9. @Sylvie I agree Sylvie, but in any one day, I will interact with between 10 and 50 people directly, not to mention that probably 1500 people or more will read what I wrote that very day. It easily beats the social interaction you get in a cubicle within a top-down hierarchy!

    Don’t tell me that you play WoW for the computers and the dragons? You play for the *people*. And don’t tell me that this interaction is not powerful!

    Also, and this is very important: the quality of the interaction I have with others online improves every year. I would even argue that it improves drastically. Even blogging is getting richer and more dynamic all the time. The things that WordPress will do for you are just crazy. But blogging is already old school!

    My claim is that physical distance will mostly not matter in the near future (assuming it even does right now), but social interaction in the workplace will grow tremendously.

  10. Kevembuangga says:

    I would rather say that “the future of software is in innovation”.
    As a retired software engineer I am still awaiting the breakthroughs I was promised when I started my career (we apologize for teaching you a programming language, in a short while computers will be able to program themselves, LOL).
    I haven’t seen any breakthrough, only so-called “improvements”, and we are still stuck with the limitations exposed in Fred Brooks’s famous “Mythical Man-Month”.
    I don’t expect that people like the uber-nerds at Lambda the Ultimate will ever come up with real breakthroughs, they just enjoy playing around with ever more cryptic finesses which, though clever and useful, have very limited practical scope.
    OTOH, fundamental problems known from the very beginning of the era lie unexplored; for instance, a 1977 paper
    from Lambert Meertens:
    “From abstract variable to concrete representation”
    in “New Directions in Algorithmic Languages 1976” ed. S.A. Schuman
    is nowhere referenced in the literature whereas I deem it the most important paper I have ever read about program semantics.
    It’s not available online (of course) and the questions it raised are neither solved nor even seriously tackled in current CS research.
    It’s soooo much more enjoyable to engage in hair-splitting about minute technical matters than to try to investigate deep problems; it makes for a lot of “nice” publications, and CS is all about publishing, not really solving problems.

  11. 1) The author of the paper in question himself does not seem to have followed up on this work in the last few years (http://www.kestrel.edu/home/people/meertens/publications/).

    2) Everyone works by trial and error. The myth of the researcher who sits down and has deep thoughts about “P=NP” until one day he yells “Eureka!” is just that, a myth. Being sane, most researchers work on problems where it is plausible they can make some progress in a few months by working in small increments each day. So nobody sits down and says “I’ll prove P=NP” or “I’ll cure cancer”. Researchers work on small problems that are directly or indirectly related to the big issues. For the purpose of getting grants, researchers become good writers and make up a story about how they are going to cure cancer any day now, but the truth is that they have no idea whether this will happen.

    3) AI researchers have overpromised and underdelivered. I guess that lots of bankers on Wall Street have done precisely the same thing. Promising the world, getting the money to fund your “great” work, and then failing is quite common. We are impressed by people who run large laboratories, promise great things, and burn a lot of money. Often the media, the graduate students, and the general public see these people as heroes. In truth, a lot of them are good actors who can tell a creative story. (Disclaimer: I got a $1 million grant not long ago.)

  12. Kevembuangga says:

    The author of the paper in question himself does not seem to have followed up on this work in the last few years.

    Of course he didn’t, even within the years following the paper.
    He would have had a hard time getting anything published in this vein because it would have been seen as “speculative” (also note that this paper is a book chapter, not a journal publication).
    But this is what is wrong with scientific publishing, most especially nowadays: only tightly scoped, “well defined” topics are deemed worthy of publication.
    There is no way to thoroughly discuss new ideas until they are sufficiently delineated, but then 95% of the really critical work has already been done.

    The myth of the researcher who sits down and has deep thoughts about “P=NP” until one day, he yells “Eureka!” is just that, a myth.

    Huh?
    Where did I suggest anything like that?
    On the contrary, I am saying that the most creative part of research doesn’t pass the publishing barrier and therefore is hardly amenable to cooperation; furthermore, whenever it does by some stroke of luck, it is buried in a flurry of nifty but insignificant “practical” results.

    Researchers work on small problems that are directly or indirectly related to the big issues.

    There is a hidden, very optimistic assumption here, namely that the researcher is right about the relation between the small problems and the big issues.
    Two millennia of scholastic work around the ideas of Aristotle (the four elements, etc…) went nowhere.
    How do you know that some of the approaches of the “big issues” are not fundamentally flawed in similar ways?

    AI researchers have overpromised and underdelivered.

    Ha! Thanks for this perfect example.
    The whole research in AI was (and still mostly is) predicated on the idea that logic and solving math problems were the “keys” to AI because this is what is hard for human beings.
    And indeed researchers became good writers and made up stories (Cyc), burning money for decades, but they had to in order to make a living, and the fault is not really on their side; it rather comes from the institutional constraints and the nitwits making the grant decisions, like… the military…
    As you acknowledge, to get funding you have to promise great things in short order and within reach of the understanding of the investors!!!
    But if your whole scheme is headed in a wrong direction because of inappropriate premises about where to search, how can you fix that?
    I suggest that instead of exploring all blind alleys (in an unbounded search space 🙂 ) you have to “question your questions” once in a while, and that there must be some support for such endeavours.
    When the whole landscape of physics was overhauled at the beginning of the 20th century, it was by a drastic reshaping of the base ideas, not by “solving small problems”, and it wasn’t the work of a lone genius but truly of a crowd of cooperating “geniuses”; maybe the social setting of science was a bit different then.
    Also, they were stuck with unsolved questions and they had to change the rules, they could not keep going with “nifty promising results”.
    This is probably the worst impact of the recent huge extension of science, there is such an overwhelming amount of “practical stuff” to be milked out of the current knowledge stock that it overshadows more fundamental questions.
    This is what I mean when I make an example of the LTU nerds as being somehow irrelevant.

    However, the shortest, most efficient path to building up the promised abundance of useful results one can foresee is likely not to have hordes of high-level scientists grinding out one small bit of high-tech wonder after another, but rather to understand how this grunt work (in spite of its sophistication) can be automated.

    P.S. Did anyone on the blog ever read Meertens’ paper?