How does your programming language handle “minus zero” (-0.0)?
22 thoughts on “How does your programming language handle “minus zero” (-0.0)?”
FWIW Rust also produces what you would expect:
fn main() {
    let minus_zero = -0.0;
    let plus_zero = 0.0;
    let parsed = "-0.0".parse::<f64>().unwrap();
    println!("{}", 1.0/minus_zero);
    println!("{}", 1.0/plus_zero);
    println!("{}", 1.0/parsed);
}
Output:
-inf
inf
-inf
Related: https://github.com/golang/go/issues/19675.
In Go, constant arithmetic has no signed zero (the constant -0.0 is just 0), so try instead:
minus_zero := 0.0
minus_zero *= -1
JavaScript:
const plusZero = 0;
const minusZero = -0;
const parsedMinus = parseInt("-0");
console.log({
    minusZero: 1/minusZero, // -Infinity
    plusZero: 1/plusZero, // Infinity
    parsedMinus: 1/parsedMinus // -Infinity
});
let parsedMinus = parseFloat("-0.0")
console.log(1/parsedMinus) // -Infinity
Depending on the compiler and the optimisation options used, your test program measures compile-time or run-time behaviour, which can differ.
For C and C++, you should declare the doubles as volatile and feed a volatile char xxx[] = "-0.0"; to strtod().
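A minimal sketch of the volatile-doubles half of that suggestion (the variable names are mine):

#include <stdio.h>

int main(void)
{
    /* volatile forces the reads to happen at run time,
       so the divisions cannot be constant-folded */
    volatile double minus_zero = -0.0;
    volatile double plus_zero = +0.0;
    printf("%f\n", 1.0 / minus_zero); /* expected: -inf */
    printf("%f\n", 1.0 / plus_zero);  /* expected: inf */
    return 0;
}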
Are you aware of a compiler where the result of strtod would be computed at compile time?
Specifically, a compiler where the following function would become trivial…
I’d be very interested in knowing about such a system.
Evaluation of the expression strtod(constant, NULL) at compile time is a legitimate optimisation any C/C++ compiler is allowed to perform. There are C/C++ compilers which evaluate, for example, signbit(constant), log(constant), strlen(constant) or strcmp(constant, constant), so why shouldn’t they (be able to) evaluate strtod(constant, NULL) or strtol(constant, NULL, 0) too? The compiler already has a parser for these numbers, the library functions are part of the language, and their semantics are well-defined. Unless you can prove or safely assume that no compiler will ever optimise these expressions, it’s better to exercise defensive programming.
I do not doubt that in all the examples provided in my blog post, an optimizing compiler is allowed to just memoize the output. Of course, it must ensure that the result is indistinguishable from what would happen without optimization. But I have never observed an optimizing compiler that would optimize away strtod, and I’d be very interested in hearing about such a scenario.
More to the point of your reply: is your expectation that by marking the string as volatile, thus precluding some optimizations, we will get different results?
I still remember your blog post https://lemire.me/blog/2020/06/26/gcc-not-nearest/
The interpretation of floating-point numbers at compile time may differ from their interpretation at run time.
I just fed the following snippet to the (ancient) GCC on your EPYC system “Rome” and to LLVM 10.0:
int main()
{
    double minus_0 = -0.0;
    double plus_0 = +0.0;
    return 1.0/minus_0 == 1.0/plus_0;
}
gcc -O3 -o- -S demo.c
main:
    movsd .LC0(%rip), %xmm0
    xorl %eax, %eax
    movl $0, %edx
    movapd %xmm0, %xmm1
    divsd .LC2(%rip), %xmm0
    divsd .LC1(%rip), %xmm1
    ucomisd %xmm0, %xmm1
    setnp %al
    cmovne %edx, %eax
    ret
    .align 8
.LC0:
    .long 0
    .long 1072693248
    .align 8
.LC1:
    .long 0
    .long -2147483648
    .align 8
.LC2:
    .long 0
    .long 0
    .ident "GCC: (GNU) 8.3.1 20190311 (Red Hat 8.3.1-3)"
clang -O3 -o- -S demo.c
main: # @main
    xorl %eax, %eax
    retq
    .ident "AMD clang version 10.0.0 (CLANG: AOCC_2.2.0-Build#93 2020_06_25) (based on LLVM Mirror.Version.10.0.0)"
Do you see the difference? GCC loads the constants and performs the divisions and the comparison at run time, whereas clang evaluates the whole expression at compile time and simply returns 0.
To answer both of your questions:
1. No, I don’t know of a compiler which optimises strtod("-0.0", 0) and evaluates it at compile time (but I do know some which optimise strcmp(constant, constant), for example);
2. I expect that a volatile char zero[] = "-0.0"; strtod(zero, 0); should disable that optimisation, but both GCC and clang bail out with the warning “passing ‘volatile char *’ to parameter of type ‘const char *’ discards qualifiers”.
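One way around that warning (my own sketch, not something from this thread) is to copy the volatile buffer into a plain one before calling strtod:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    volatile char zero[] = "-0.0"; /* the compiler cannot assume its contents */
    char buf[sizeof zero];
    /* copy element by element; passing the volatile buffer to memcpy
       would discard the qualifier just as strtod does */
    for (size_t i = 0; i < sizeof zero; i++)
        buf[i] = zero[i];
    printf("%f\n", 1.0 / strtod(buf, NULL)); /* expected: -inf */
    return 0;
}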
Zig has a “comptime” environment in which code is evaluated at compile time: https://ziglang.org/documentation/master/#comptime
AFAIK the D language/compiler sports a similar feature and allows code to be evaluated during compilation.
fn main() {
    let minus = -0.0;
    let plus = 0.0;
    println!("{}", 1.0/minus);
    println!("{}", 1.0/plus);
}
$ rustc minus.rs
$ ./minus
-inf
inf
Same behaviour with D as well.
void main() {
    import std.stdio, std.conv;
    double minus_zero = -0.0;
    double plus_zero = +0.0;
    double parsed = to!double("-0.0");
    writeln(1/minus_zero);
    writeln(1/plus_zero);
    writeln(1/parsed);
}
$ ldc2 a.d
$ ./a
-inf
inf
-inf
$
Related to https://github.com/golang/go/issues/30951: I assume that in most programming languages the equivalent of “double x = -0” is the same as “double x = 0”. Meaning: -0 is interpreted as an integer (and hence as 0, since there is no negative integer zero), which is then converted to a double.
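That is what C does, at least; a quick sketch using signbit from math.h:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double from_int = -0;   /* -0 is the integer zero, which converts to +0.0 */
    double from_lit = -0.0; /* a true negative floating-point zero */
    /* signbit returns nonzero when the sign bit is set */
    printf("%d %d\n", signbit(from_int) != 0, signbit(from_lit) != 0); /* prints: 0 1 */
    return 0;
}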
Yes, but from what I can see, they will at least warn you.
Negative zeros are one of the issues that we have to handle in our bit reproducibility tests. When we are unable to justify a sign, we will do something like
x = x + 0.0
which converts -0.0 into +0.0. I think this is the correct behavior per IEEE 754, but I don’t know whether we can count on all platforms to do it.
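For illustration, a minimal C sketch of that normalization (under the default round-to-nearest mode, IEEE 754 defines (-0.0) + (+0.0) to be +0.0):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = -0.0;
    printf("%d\n", signbit(x) != 0); /* 1: x is -0.0 */
    /* -0.0 + +0.0 rounds to +0.0 in every mode except round-toward-negative */
    x = x + 0.0;
    printf("%d\n", signbit(x) != 0); /* 0: x is now +0.0 */
    return 0;
}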
I just tried it in Ada.

with Ada.Float_Text_IO;
use Ada.Float_Text_IO;

procedure Main is
    minus_zero : Float := -0.0;
    plus_zero : Float := +0.0;
    parsed : Float := Float'Value("-0.0");
begin
    Put(1.0/minus_zero);
    Put(1.0/plus_zero);
    Put(1.0/parsed);
end Main;
It gave me “-Inf”, “+Inf”, “-Inf”.
Someone needs to send this to the team working on Apple’s iPhone Weather App
Zero in math is kind of dumb. 1/0 should equal 1. Zero is null or nil or no thing.
Isn’t division by zero undefined? I would expect the result to be NaN, if not an error.
Isn’t division by zero undefined?
It is, in the sense that division by zero is not defined over the real numbers, but IEEE 754 works with an extended set which includes -infinity and +infinity: a nonzero value divided by zero yields a signed infinity, and only 0/0 yields NaN.
Whether it is wise to do so is another story.
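For the record, a small C sketch of those IEEE 754 rules (volatile keeps the divisions from being constant-folded; do not compile with -ffast-math):

#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;
    printf("%f\n", 1.0 / zero); /* inf: nonzero over zero is a signed infinity */
    printf("%f\n", 0.0 / zero); /* nan (possibly printed as -nan): 0/0 is invalid and yields NaN */
    return 0;
}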