In “Representing and Intervening”, Ian Hacking argues persuasively that theory and practice take turns leading. He gives numerous historical examples. Many of the examples are from physics, but I believe a careful analysis will show this turn-taking in computer science. For example, I believe Turing discovered the idea of a universal computer before any actual universal computer was built.
No idea is 100% original. If theory and practice take turns leading, then it follows that every new theory can be traced back to practical precedents, and every new practice can be traced back to theoretical precedents. Hacking is very persuasive. I highly recommend his book.
Turing’s concept of a universal Turing machine (a Turing machine that can compute anything that any other Turing machine can compute) is certainly a big step forward in precision and depth from the vague idea that human computers can be automated.
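To make the notion of “universal” concrete, here is a minimal sketch in Python (my own illustration, not anything from Turing’s paper): a single fixed simulator that can run any Turing machine handed to it as data. Only the rule table and tape change from run to run; the example incrementer machine below is a hypothetical toy.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10000):
    # A fixed interpreter: this function never changes, only the rules and
    # tape it is fed, which is the essence of a universal machine.
    # rules maps (state, symbol) -> (next_state, symbol_to_write, move),
    # where move is -1 (left) or +1 (right); the "halt" state stops the run.
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Hypothetical example machine: increment a binary number written with the
# least significant bit first (so "111" is 7 and "0001" is 8).
increment = {
    ("start", "1"): ("start", "0", +1),  # flip 1 -> 0 and keep carrying
    ("start", "0"): ("halt", "1", +1),   # absorb the carry
    ("start", "_"): ("halt", "1", +1),   # ran past the end: append a new digit
}

print(run_turing_machine(increment, "111"))  # prints 0001

Feeding a different rule table to the same simulator runs a different machine, which is the sense in which one machine can compute anything any other machine can.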
I think it’s worth $25, easily. (Disclaimer: Hacking was my PhD advisor.)
@Peter
I have always viewed modern electronic computers as the automation of human computers, and human computers predate Turing.
@Peter
Thanks Peter. Hacking’s ebook is $25. I find this a bit much. Of course, I can get it through the university (in hard copy), but then there is a time tax involved, and the requirement to use paper. I will see if I can unearth an article that he has written on this topic.
@Peter
My counterpoint is that we were already “programming” human computers with instructions that were Turing complete before Turing was born.
I was always told that early on, when Turing was thinking about computers, he was thinking about human computers. But even if he was thinking about electronic computers, he had to know about human computers, either directly or indirectly.
We can also trace back efforts, dating to the difference engine, to build ever more complete and automated mechanical computers. People were acutely aware, before Turing, that you could build mechanical computers that could do some of what human computers could do, but not quite everything. There was clearly a quest, going back to Babbage, to build ever more complete engines.
Granted, Turing’s contribution (building on the work of Gödel and Hilbert) is remarkable in that it laid out clear principles, resolving the matter once and for all.
My view is that this is not very different from Watt and thermodynamics. We first had the engine and then thermodynamics. We first had computers and then we got computer science.
Granted, I could be entirely wrong, and maybe theory drives practice as often as the reverse; I will investigate… but the Turing example does not convince me.
“We toyed with uranium (and got sick) long before we could build an atomic bomb.”
I don’t find this example fitting. Actually, we first discovered quantum mechanics and relativity (theory) and then used them to concoct atomic bombs (practice).
Rzluf says:
Watt did not invent the steam engine. The idea had been known for hundreds of years. He built the first working and usable engine. The basic theory was created prior to this period, and many people had built more or less successful engines before him.
Today, such an approach, based on pure trial and error, is almost impossible. Even in the days of Edison and Tesla, it was criticized as ineffective.
In my opinion, success comes from an appropriate feedback loop between theory and practice.
darf ferrara says:
A better (though closely related) example of theory leading practice would be Church’s lambda calculus. The theory directly led to McCarthy implementing LISP on real hardware.
@Poloni
We spent decades toying with radioactive material without having a clue about what was going on. Theory followed much later.
Anonymous says:
Like some other commenters, I generally find the analogy of a seesaw going back and forth between theory and practice more compelling than practice consistently leading, with theory coming along behind to organize a better story. Of course, how compelling something is to me isn’t necessarily worth much.
One specific theory-to-practice pattern is when formal models allow some kind of extrapolation to an idea that no one had stumbled on yet. One small example that I like is Dan Grossman’s essay “The Transactional Memory / Garbage Collection Analogy”.
The discussion of theory vs. practice made me think of the excellent book “Antifragile: Things That Gain From Disorder” by Nassim Nicholas Taleb. In it, he mainly argues that theory follows practice.
In my review of the book, I summarized it like this:
There is also a section on universities and technological development. Do universities cause technical progress? Not according to Taleb. There is a tension between education, which loves order, and innovation, which loves disorder. A lot of technical innovations come from luck, tinkering, and trial and error. Often, theory comes after, but when a discovery or innovation is described afterwards, it seems more planned and ordered than it really was.
Regarding the Linus quote, I also like this one (from David Kelley):
“Enlightened trial and error outperforms the planning of flawless intellects.”