The fact that ‘your computer’ is storing 8/9 as a floating point is an artifact of the programming language/math system you are using. Languages/systems that support a ‘full numeric tower’ can store rationals, and it’s very nice when that numeric equality is formally defined, as in the Scheme spec, or the Common Lisp spec.
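Python’s standard `fractions` module gives a taste of what an exact rational type in a numeric tower buys you; a minimal sketch:

```python
from fractions import Fraction

# A binary float cannot hold 8/9 exactly; it stores the nearest
# representable value, so the printed digits are an approximation.
approx = 8 / 9

# A rational type -- the analogue of Scheme's or Common Lisp's exact
# rationals -- stores a numerator and denominator, so equality is exact.
exact = Fraction(8, 9)
print(exact == Fraction(16, 18))  # True: both normalize to 8/9
print(exact * 9 == 8)             # True: no rounding anywhere
print(exact == approx)            # False: the float is not exactly 8/9
```

Note that the last comparison is itself exact: Python compares the rational against the float’s true binary value, and they differ.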
It really does matter, in practical terms; real engineering disasters occur when numeric representations, conversions, and limits are not understood or not formally defined.
I agree. Equality between two numbers is not a trivial issue.
Trying to figure out what x = y means for one engineer, and whether it means the same thing for another engineer, is hard.
Hence, we have to expect semantic technologies (such as the ones you are no doubt working on) to have hard limits.
One response to this is to develop technologies with ‘soft’ limits: for example, checking whether two numbers are within some delta of one another, rather than exactly equal. For the semantic technologies, we are using lots of statistical techniques in combination with symbolic techniques.
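A minimal sketch of the ‘within some delta’ test, using Python’s `math.isclose`:

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)  # False: the two sides rounded differently in binary

# The 'soft' test: are the numbers within some delta of one another?
print(math.isclose(a, b, rel_tol=1e-9))  # True
print(abs(a - b) < 1e-9)                 # True: the hand-rolled version
```

The choice of delta (absolute vs. relative, and its size) is itself an engineering assumption that should be made explicit.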
Another point is to be careful of assumptions and, to as great an extent as practical, not to build them into the engineered system. For example, it probably does not actually make sense to have a ‘=’ function for floating point numbers.
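To see why ‘=’ on floats is so treacherous: it asks whether two rounded bit patterns coincide, which depends on the order in which the intermediate roundings happened. A small demonstration:

```python
# Float addition is not even associative: the same three numbers,
# summed in a different grouping, produce different bit patterns.
x = (0.1 + 0.2) + 0.3
y = 0.1 + (0.2 + 0.3)
print(x == y)  # False
```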
A third point is that ‘equality’ is tricky, and almost always means ‘equality for some purpose.’
Some years ago (at BNR) I was mired in the problem of floating-point representations for real numbers and, in an effort to obey the formal laws of arithmetic – such as commutativity and distributivity – we developed a constraint programming system on bounded intervals. It was an interesting system.
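The flavor of bounded-interval arithmetic can be sketched with a toy class (this is illustrative only – not the BNR system – and it omits the outward rounding a rigorous implementation needs to guarantee enclosure):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A pair [lo, hi] asserted to contain the true real number."""
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of enclosures encloses the sum.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def intersects(self, other):
        # The interval analogue of '=': the true values *could* be equal.
        return self.lo <= other.hi and other.lo <= self.hi

third = Interval(0.333, 0.334)  # encloses 1/3
print((third + third + third).intersects(Interval(1.0, 1.0)))  # True
```

Equality becomes a constraint (‘these intervals overlap’) rather than a bit-pattern test, which is exactly the ‘soft limit’ idea above.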
The problem, from a philosophical point of view, is with the concept of definition itself. Descartes, Frege, Brouwer, and so many others have grappled with it, but no definitive concept of definition was ever widely accepted. Why? Not because any concept will do, but because a definition generally relies on an overlapping conceptual schema, built on a proposed convention, by which it becomes possible to infer or deduce what is being demonstrated. The concept of definition is thus part of a larger theory of demonstration – of proof theory – which has to contain some extraneous definitions of its own in order to fulfill its function: semantics. And then we go back to Gödel and understand a little more of this intricate relation between syntax and semantics.
(Remember, that in the AI wars, I am a ‘scruffy’)
One well-known example is the Ariane 5 failure:
http://www.ima.umn.edu/~arnold/disasters/ariane.html
There’s a significant literature on interval constraint logic programming:
http://www.google.ca/search?q=constraints+on+interval+logic+programming
You can also compute real numbers using continued fractions:
http://en.wikipedia.org/wiki/Continued_fraction
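As a sketch of the continued-fraction approach: the truncations (convergents) of [1; 2, 2, 2, …] are the best rational approximations to √2, and each convergent can be evaluated exactly with rationals (the helper name here is mine):

```python
from fractions import Fraction

def convergent(terms):
    """Evaluate a finite continued fraction [a0; a1, a2, ...] exactly."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# sqrt(2) = [1; 2, 2, 2, ...]; successive truncations converge rapidly.
for n in range(2, 6):
    c = convergent([1] + [2] * n)
    print(c, float(c))  # 7/5, 17/12, 41/29, 99/70, each closer to sqrt(2)
```

A lazy stream of terms like this gives arbitrary-precision reals: to get more accuracy, consume more terms, with no rounding along the way.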