I absolutely agree that dynamic languages are initially more productive and definitely easier to use than static languages.
My counterargument is that making code easier to write does not necessarily yield long-term gains. In bigger projects, the lack of language formality can be detrimental to the maintenance bottom line.
Unit testing helps but we can’t forget that unit tests are a form of sampling and as such, they tend to suffer from selection bias.
Static analysis, while limited in what it can accomplish, has a 100% success rate for what it covers.
Unit tests, on the other hand, have a failure rate due to limited input/output/state coverage. They should not be considered a replacement for basic error checking (type mismatches, undefined variables, etc.) but a supplement to it.
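To make the sampling point concrete, here is a minimal Python sketch (the shipping_cost function and its prices are made up): the one path the test samples passes, while the unsampled path hides a basic type error that static checking would have flagged.

```python
def shipping_cost(weight_kg, express=False):
    if express:
        # Bug: string/number mix-up; no test ever exercises this branch.
        return "15.00" + weight_kg * 2.5
    return 5.00 + weight_kg * 1.0

def test_shipping_cost_standard():
    # The "sample": only the non-express path is checked.
    assert shipping_cost(2.0) == 7.00

test_shipping_cost_standard()       # passes, so the suite is green
shipping_cost(2.0, express=True)    # TypeError at runtime
```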
Daniel, thank you for this post and for your reply on StackOverflow. I agree that duck typing speeds up the programming process. I’ve experienced that in Python, and to a lesser extent with C++ templates.
Assume you could eliminate typing concerns completely: not only do you not have to explicitly type variables, but in addition you have some magical assurance that you won’t introduce any type-related bugs. That would be great! But I don’t imagine that would make someone 2x more productive. Do people really spend half their programming time on type-related matters?
@Kevin Roughly: Ontologies are formal definitions, folksonomies are informal and flexible “de facto” definitions.
Daniel Haran says:
This week I presented Ruby on Rails to two senior developers at a company – one doing Java, the other .NET work.
I expected developers to express surprise at the speed of development, scaffolding, validations, or migrations. I was shocked that script/console seemed to them the biggest gain. It is such a basic tool for me now that any language that doesn’t have it seems broken.
There’s a lot of culture and tools surrounding a language. Sometimes those tools require certain language features to even be possible. Either way, part of the reason programmers are more productive with these languages doesn’t have anything at all to do with the language itself. Refactoring support is nice in theory, but a faster REPL kicks it in the nuts.
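For readers who have never used such a console, the appeal is poking at live objects instead of writing throwaway scripts. A rough Python equivalent of that workflow (the session below is just illustrative):

```python
>>> import datetime
>>> d = datetime.date(2009, 1, 1)
>>> d.isoweekday()                   # instantly answers "what does this return?"
4
>>> d + datetime.timedelta(days=90)  # try an idea without writing a file
datetime.date(2009, 4, 1)
```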
Not having a lot of ceremony means a single person can try lots of small personal projects and *become a better hacker, faster*.
In 3 years working with Ruby, I’ve yet to have a type mismatch problem that wasn’t caught by unit tests.
Having worked on very large Java apps I’d say the sheer magnitude of the code base is a maintainability hazard. Smaller code bases do in fact yield long-term gains.
Three years ago, many people dissed RoR as a toy, but it was promptly copied by nearly every language community. Then it was “Oh, that’s nice, but that’s only for greenfield development.” Now it’s “Oh, that’s nice, but you couldn’t ever do a big project with Ruby.”
The skeptics have a poor track record on this one 🙂
@Reinier Thanks for the insights and corrections. But I doubt my blog post will turn even one programmer away from Python. 😉
“In 3 years working with Ruby, I’ve yet to have a type mismatch problem that wasn’t caught by unit tests.”
In 15 years working with C++ I’ve yet to have a type mismatch problem that wasn’t caught by the compiler.
The difference is that I didn’t have to build or maintain the compiler 🙂
As I said in my previous comment, while static analysis might have a limited scope, it has a 100% success rate and costs you nothing to build or maintain.
Unit tests should be considered a supplemental measure because they have a non-quantifiable failure rate. Your code could pass all your tests and still have silly type mismatch/undefined variable errors in covered code due to uncovered input/state parameters.
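A small sketch of that failure mode in Python (the total function is hypothetical): the test executes every line of total(), yet a call with a different input type still fails at runtime, which is exactly the gap a compiler closes for free.

```python
def total(prices):
    result = 0
    for p in prices:
        result += p
    return result

def test_total():
    # 100% line coverage of total(), and it passes.
    assert total([1, 2, 3]) == 6

test_total()
total(["1", "2", "3"])   # same lines, uncovered input type: TypeError
```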
I think Ruby gets credit that rightfully belongs to Ruby on Rails. RoR is designed to make routine web development easy. Programmers give up control of many details in exchange for fast development. That’s a great idea. Most developers don’t need the level of control they think they do.
But if DHH had been a Python programmer and had written a Python on Rails web framework, I imagine people would see equivalent productivity gains.
Duck typing isn’t static typing.
Some languages are statically typed but infer almost the entire type graph, such as Boo.
I believe you meant to say:
Dynamically, latently, structurally typed languages like Python or Ruby are considerably easier than statically, manifestly, nominally typed languages like Java and C++.
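For readers unfamiliar with the vocabulary, “structurally typed” is the duck-typing half of that sentence. A minimal Python sketch (the names are invented): any object with the right shape works, with no declared interface or shared base class, where a nominally typed language would demand one.

```python
import io

def shout(source):
    # The structural requirement: "anything with a read() method".
    return source.read().upper()

class CannedResponse:
    def read(self):
        return "hello"

print(shout(io.StringIO("hello")))   # HELLO -- a real file-like object
print(shout(CannedResponse()))       # HELLO -- unrelated type, same shape
```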
Also note that as far as the usefulness of type graphs goes, your two examples (Java and C++) mostly suck at being useful with them; neither language truly does type systems particularly well. For example, Java does now have generics, but it cannot express generic positional wildcards (‘meta-generics’, where I say: X is any type which has exactly one generics parameter, where e.g. that parameter is called ‘Y’ and extends Number), its generics are not reified (at runtime, instances of objects forget their generics), and generic types are not considered in the type graph (you cannot implement both Comparable&lt;Integer&gt; and Comparable&lt;String&gt; at the same time).
At the risk of sounding rude, it’s exactly these kinds of posts that keep the smart programmer AWAY from Python and Ruby. The rabid and mostly clueless blather of supporters on their type systems is a big fat red dot in the ‘reasons NOT to join this community’ column.
FWIW, I think you’re mostly on the right track, at least for small projects. The value of your static type graph grows exponentially with the size of your code base.
Also: a big part of the utility of static type graphs lies in your tooling. If you’re hacking away in Notepad, no amount of type graph awesomeness (think Haskell) is really going to help all that much, which makes it so painful that there’s no good Haskell editor. Java (and Smalltalk, for those who know what that is) shows what can happen when people write nice tools: amazing refactor support, no need to learn APIs through and through (auto-complete will tell you all you need to know), interfaces that mostly write themselves, and very decent auto-bug-checking, generally in areas where you weren’t thinking of (areas where unit tests historically don’t do particularly well). You do need to kit out your Java build process with PMD or FindBugs to get this, but that isn’t very difficult.
Now imagine those tools combined with tricks only serious type systems, such as Haskell’s, can do, like automatically generating unit tests or inferring a large chunk of the type graph, and it’s clear to me that the hypothetical best programming language ever would have static typing. We’re just not there yet.
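That kind of inference can even be bolted onto Python: a checker such as mypy (which postdates this discussion) deduces the types below from the literals alone, with no annotations written, a small taste of what Boo or Haskell do wholesale. A hedged sketch; the exact message wording varies by version:

```python
n = 40 + 2        # inferred as int, no annotation anywhere
s = "answer: "    # inferred as str

print(s + n)      # flagged before the program ever runs:
                  # Unsupported operand types for + ("str" and "int")
```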
Daniel Haran says:
John: I think you are correct.
It’s telling, though, that you used Python as an example: it’s one of the few languages in which a high-level framework like Rails could effectively be built.
Even if much of the power of Rails comes from Ruby, a lot of the productivity gains are from cultural aspects: “convention over configuration”, writing tests, YAGNI, DRY… the list goes on.
Daniel Haran says:
Daniel: without getting too formal about it, do you have a good definition of “productivity”?
Normally we measure it as how fast you can get to a certain outcome, but with dynamic languages those outcomes are often quite different.
Could anyone explain the second difference: Ontologies vs Folksonomies? I don’t get it.
@Haran No, I don’t have a good definition of productivity. In fact, I would love to find a way to determine how productive I am, in general. Alas, I never found any satisfying measure. My salary is a poor measure of my productivity.
But that is also my whole point, formal definitions are not as useful as you may think. I don’t need a definition of productivity to work with the concept.
I think TCO (total cost of ownership) would be the ideal metric of productivity. A language that can deliver a piece of software with a lower lifetime TCO should be considered more productive, because more was produced for less.
TCO encompasses everything from time-to-market risk to development costs, maintenance costs, and failure-rate costs, as well as any other overhead incurred.
Without a formal definition of metrics used for comparison and decision making, everything is just subjective opinion and nothing can be objectively quantified.
I do not understand the 4-by-2 chart comparing the two styles. Can you clarify it a little for me?
- For example, how do you compare a folksonomy with an ontology? You can say that a folksonomy is the informal version of a taxonomy, but an ontology has a much more complex structure, one that includes logical inference.
- Why can’t static languages have a sort of generic solution?
Anyway, I completely agree that dynamic languages are easier; I am a Python fan.
I think you are kind of mixing up the definition of productivity with a metric for it. I was wondering if you need a definition of a concept to be able to measure it. Or, if you have a metric, is the definition always deducible from it?
“For example, how do you compare a folksonomy with an ontology?”
Ontologies require precise definitions. You need to know the identity of object A if you are to apply the ontology to object A.
Folksonomies do not require definitions. Yet, they effectively define classes and features.
“Why can’t static languages have a sort of generic solution?”
It is not that they can’t; they do. But they make the programmer pay a price.
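A short sketch of that asymmetry in Python terms: the function below is generic for free, whereas the static equivalents (a C++ template, or a Java method with bounded type parameters) buy the same reuse only after declaring the machinery up front.

```python
def bookends(xs):
    # Works on any sequence whose elements support +: no type
    # parameters, bounds, or overloads to declare.
    return xs[0] + xs[-1]

print(bookends([1, 2, 3]))         # 4
print(bookends(["a", "b", "c"]))   # ac
```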
“I was wondering if you need a definition of a concept to be able to measure it.”
No. You don’t. I can tell you that it is warm or cold outside without any definition of what it means to be warm or cold.
Thank you very much for your clarifications!