Let's not forget that hardware miniaturization is only one way to achieve greater performance. Sooner or later, more attention will have to be devoted to reducing software bloat and improving software optimization.
Many programmers who started their careers with hardware offering tens or hundreds of KB of RAM and a few MHz of CPU clock speed watch in horror as popular applications ship multi-gigabyte installers. Looking at my Windows 7 system, I see installed application sizes ranging from 290 KB to 8.2 GB with a median around 4.5 MB, which is better than I expected, but it doesn't mean there is no problem.
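A rough sketch of how one might gather such numbers, assuming the standard "EstimatedSize" entries (in KB) that installers record in the Windows uninstall registry; entries that omit the value are skipped, so treat the result as an estimate:

```python
# Sketch: median size of installed applications, from the uninstall registry.
import winreg
import statistics

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_sizes_kb():
    sizes = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        count = winreg.QueryInfoKey(root)[0]  # number of subkeys (one per app)
        for i in range(count):
            name = winreg.EnumKey(root, i)
            try:
                with winreg.OpenKey(root, name) as app:
                    size_kb, _ = winreg.QueryValueEx(app, "EstimatedSize")
                    sizes.append(size_kb)
            except OSError:
                pass  # this entry did not record an EstimatedSize
    return sizes

if __name__ == "__main__":
    sizes = installed_sizes_kb()
    print(f"{len(sizes)} apps, median {statistics.median(sizes) / 1024:.1f} MB")
```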
@Paul
In many ways, the gains from software have matched the gains from hardware over time…
It is true that business apps typically have terrible performance, but that’s because nobody cares… if you look at problems where performance matters… we often have software that is orders of magnitude faster than older software.
This being said, you are entirely correct that there is a lot of room for optimization at the software level, and not just constant factors.
Thankfully, tools (e.g., compilers, libraries) are improving all the time. One hopes that the nanobots that will live in our arteries in 2050 won’t be programmed in 2015-era Java using Eclipse.
To me, one of the most promising non-conventional approaches to making classical computers faster is rapid single flux quantum (RSFQ) based processors (https://en.wikipedia.org/wiki/Rapid_single_flux_quantum).
RSFQ technology uses hardly any power (because the components are superconducting) and can easily run at clock frequencies on the order of 100 GHz. Of course, the downside is that RSFQ circuits need cryogenic operating temperatures. However, this is not so much of an issue in large computing centers, where the infrastructure is already a big investment. A bit further into the future, I could even imagine consumer-level RSFQ processors with some kind of miniature cooling unit to keep the operating temperature low enough.
The operating principles of RSFQ processors have been proven, but some engineering problems still need to be solved. However, it seems that the industry is not very interested in pursuing RSFQ for whatever reason. By googling, I can find several roadmaps and technology assessments (one by the NSA), all of which say that this technology should be doable within reasonable timescales.
This is an interesting perspective. As the number of cores increases, the cores will become more and more independent. Clearly, if you have 4 cores, they can be synced rather easily. Syncing thousands of cores would be far more problematic. Therefore, programming will become increasingly parallel. We already see this trend with GPUs and distributed systems.
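A toy sketch of the style this pushes us toward: give each core an independent chunk and synchronize only once at the end (plain Python processes here, but the same idea scales to GPUs and clusters):

```python
# Toy sketch: an embarrassingly parallel job split across worker processes.
# Each worker does purely local work; the only synchronization point is the
# final reduction, which is what lets this style scale to many cores.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    # No shared state, no locks, no communication between workers.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]  # independent slices
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(chunk_sum, chunks)
    return sum(partials)  # the single synchronization point

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1_000_000)))
```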
@Anonymous
Is the solution to use RSFQ processors? Maybe.
But before we get too excited…
A frequency of 100 GHz implies one pulse every 10 picoseconds. In 10 picoseconds, light travels 3 mm. Building chips 3 mm wide still implies cramming circuits in a very small space. That is fine for simple circuits, but for a generic processor core, that is too small given our current technology.
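The back-of-the-envelope arithmetic, assuming propagation at the speed of light in vacuum (on-chip signals are slower still):

```python
# Clock period at 100 GHz and how far light travels in one cycle.
C = 299_792_458        # speed of light, m/s
FREQ = 100e9           # clock frequency, Hz

period_s = 1 / FREQ                 # one clock cycle: 1e-11 s = 10 ps
distance_mm = C * period_s * 1e3    # distance covered per cycle, in mm

print(f"period = {period_s * 1e12:.0f} ps, light travels ~{distance_mm:.1f} mm")
# => period = 10 ps, light travels ~3.0 mm
```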