Real numbers are not real
To say that real numbers aren't real is not just a play on words, but a computational reality. Let's do a simple experiment: try the following in your favorite number-crunching tool. In Excel:
=IF(1+1E-016 = 1,1,0)

will print 1. In Python:
>>> 1 == 1+1e-16
True

In C, the code
#include <stdio.h>

int main(void)
{
  if (1+1e-16 == 1) printf("True\n");
  else              printf("False\n");
  return 0;
}

will print True. In R:
> 1 == 1+1e-16
[1] TRUE
Note that this behavior is not restricted to small numbers; it
also happens with larger numbers. For example:
>>> 1+1e16 == 1e16
True

This shows that the precision of the result depends on the relative scale of the numbers involved.
Although this behavior is typical, there are some exceptions. One is the GNU bc command-line tool:
> bc
1.0 == 1.0+10^(-16)
1
scale=20
1.0 == 1.0+10^(-16)
0
1.0 == 1.0+10^(-21)
1

When we set the scale parameter to 20, bc is able to recognize that the numbers are different. This just shifts the bar, though; bc still fails to recognize the difference between the last two numbers. Another library that allows for extended, or even unlimited (up to available memory), precision is the GNU Multiple Precision Arithmetic Library, but its details are beyond the scope of this document.
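As a simple illustration of extended precision (this is Python's standard decimal module, not one of the GNU tools above and not something Gurobi uses), raising the precision to 20 significant digits makes the 1e-16 difference visible while a 1e-21 difference is still lost:

from decimal import Decimal, getcontext

getcontext().prec = 20                                # keep 20 significant digits
print(Decimal(1) + Decimal("1e-16") == Decimal(1))    # False: the difference survives
print(Decimal(1) + Decimal("1e-21") == Decimal(1))    # True: still below the precision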
The reason for these failures is that computers must store numbers as a finite sequence of bits, and most common implementations adhere to the IEEE 754 standard. In particular, IEEE 754 defines the double-precision format. This standard is so pervasive that almost all computers have specialized hardware to speed up operations on numbers represented in this format. One consequence is that mathematical operations on alternative, extended number representations tend to be significantly slower than operations on IEEE 754 double-precision numbers. Slowdowns of 10X or even 100X are common.
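To make this concrete, the following sketch (using Python's standard struct module; the value 0.1 is just an example) prints the raw 64 bits that the IEEE 754 double-precision format assigns to a number: 1 sign bit, 11 exponent bits, and 52 fraction bits, which is all the information the hardware has to work with.

import struct

# Reinterpret an IEEE 754 double as its underlying 64-bit integer.
bits, = struct.unpack("<Q", struct.pack("<d", 0.1))
print(f"{bits:064b}")   # sign (1 bit), exponent (11 bits), fraction (52 bits)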
Due to the performance obtained from hardware support for
double-precision arithmetic, Gurobi relies on this standard (as does
most software). However, this speed comes at a cost: computed results
often differ from what mathematics may dictate. For example, the
associative property (a + (b + c) = (a + b) + c) is a
fundamental property of arithmetic, but double-precision arithmetic
gives (in Python):
>>> (1+1e-16)+1e-16 == 1 + (1e-16 + 1e-16)
False

Furthermore, many common numbers (e.g. 0.1) cannot be represented exactly.
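For instance, 0.1 is stored as the nearest representable double, which becomes visible as soon as more digits are printed or rounding errors accumulate (a quick check in Python):

>>> print(f"{0.1:.20f}")
0.10000000000000000555
>>> 0.1 + 0.2 == 0.3
False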
Consequently, simple questions, such as whether two numbers are equal, whether a number is equal to zero, or whether a number is integral, can be quite complicated when using floating-point arithmetic.
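In practice, such questions are answered by comparing against a tolerance rather than testing exact equality. The sketch below only illustrates the idea; the function names and the 1e-9 tolerance are made up for this example, and Gurobi's actual tolerances are the topic of the next section.

TOL = 1e-9   # illustrative tolerance, not a Gurobi setting

def approx_equal(a, b, tol=TOL):
    # Treat two values as equal if their difference is within the tolerance.
    return abs(a - b) <= tol

def is_integral(x, tol=TOL):
    # Treat a value as integral if it lies within the tolerance of the nearest integer.
    return abs(x - round(x)) <= tol

print(approx_equal(1.0, 1.0 + 1e-16))   # True
print(is_integral(2.9999999998))        # True

Testing whether a number is zero is just approx_equal(x, 0.0) in this scheme.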