Entropy-efficient Computing
Microprocessors and storage devices are subject to the second law of thermodynamics: using them turns usable energy (oil, hydrogen) into unusable energy (heat). Data centers are already limited by their power consumption and heat production. Moreover, many new devices need to operate for a long time on little power: (1) smartphones, (2) powerful computing devices implanted in our bodies, (3) robots sent into space.
Our approach to entropic efficiency remains crude. We improve power supplies. We shut down disks and CPUs when they are idle. Deeper questions arise, however:
- Except perhaps for embarrassingly parallel problems (such as serving web pages), parallel computing trades entropic efficiency for shorter running times. If entropic efficiency is your goal, will you stay away from non-trivial parallelism?
- We analyze algorithms by considering their running time. For example, shuffling an array takes time O(n) whereas sorting it takes time O(n log n). Yet, unlike sorting, which erases information about the original order of the elements, shuffling is just a permutation. So I expect that shuffling an array should be possible without any entropy cost (heat generation)! See the sketch after this list.
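As a rough illustration of the logical side of this argument (saying nothing about physical hardware), consider a Fisher-Yates shuffle that keeps its random choices: no information is erased, so the operation can be undone exactly. By contrast, a sort must either erase the original ordering or spend memory remembering it, and by Landauer's bound erasing a bit costs at least kT ln 2 in heat. The Python sketch below is my own illustration of this reversibility, not a recipe for entropy-free computing.

```python
import random

def reversible_shuffle(a):
    """Fisher-Yates shuffle that records each random choice.
    Keeping the choices means no information is erased:
    the operation is logically reversible (Bennett, 1973)."""
    swaps = []
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # random index in [0, i]
        a[i], a[j] = a[j], a[i]
        swaps.append(j)
    return swaps

def unshuffle(a, swaps):
    """Undo the shuffle by replaying the recorded swaps in reverse order."""
    for i, j in zip(range(1, len(a)), reversed(swaps)):
        a[i], a[j] = a[j], a[i]

data = list(range(10))
record = reversible_shuffle(data)
unshuffle(data, record)
assert data == list(range(10))   # the original order is fully recovered
```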
Suppose I give you a problem, and I ask you to solve it using as little entropy as possible. How would you go about it?
Further reading:
- M. P. Frank, Physical Limits of Computing, Computing in Science and Engineering, vol. 4, no. 3, May/June 2002, pp. 16-25.
- R. Landauer, Irreversibility and Heat Generation in the Computing Process, IBM Journal of Research and Development, vol. 5, no. 3, 1961.
- C. H. Bennett, Logical Reversibility of Computation, IBM Journal of Research and Development, 1973.