12.5 Representation Error

This section explains the "0.1" example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed.

Representation error refers to the fact that most decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won't display the exact decimal number you expect:

    >>> 0.1
    0.10000000000000001

Why is that? 1/10 is not exactly representable as a binary fraction. Almost all machines today use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 double precision. IEEE 754 double precision numbers contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form $J/2^N$ where J is an integer containing exactly 53 bits. Rewriting $$ 1 / 10 \approx J / 2^N $$ as $$ J \approx 2^N / 10 $$ and recalling that J has exactly 53 bits (i.e. $2^{52} \le J < 2^{53}$), the best value for N is 56:

    >>> 2L**52
    4503599627370496L
    >>> 2L**53
    9007199254740992L
    >>> 2L**56/10
    7205759403792793L
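
As a cross-check, a short loop can search for the exponent directly. The following sketch uses Python 3 syntax (plain integers are arbitrary-precision, so no `L` suffix is needed); only N = 56 yields a rounded quotient with exactly 53 bits:

```python
# Round 2**N / 10 to the nearest integer and report its bit length;
# a 53-bit result means J/2**N is a candidate double-precision value.
for N in range(50, 60):
    q, r = divmod(2**N, 10)
    J = q + 1 if r >= 5 else q   # round to nearest
    print(N, J, J.bit_length())  # bit length is 53 only for N = 56
```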

That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value for J is then that quotient rounded:

    >>> q, r = divmod(2L**56, 10)
    >>> r
    6L

Since the remainder is more than half of 10, the best approximation is obtained by rounding up:

    >>> q+1
    7205759403792794L

Therefore the best possible approximation to 1/10 in double precision is that over $2^{56}$, or

    7205759403792794 / 72057594037927936

Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!
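
On a modern Python 3 interpreter, the `fractions` module can expose the exact binary fraction a float stores, confirming both claims (note that it reports the fraction in lowest terms, so numerator and denominator are each half of the values above):

```python
from fractions import Fraction

# Fraction(float) recovers the exact value of the stored double.
exact = Fraction(0.1)
print(exact)                     # 3602879701896397/36028797018963968
print(exact == Fraction(1, 10))  # False: not exactly 1/10
print(exact > Fraction(1, 10))   # True: a little larger, since we rounded up
```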

So the computer never "sees" 1/10: what it sees is the exact fraction given above, the best double approximation it can get:

    >>> .1 * 2L**56
    7205759403792794.0
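
The same fact can be checked the other way around with `float.as_integer_ratio` (available since Python 2.6; the sketch below uses Python 3's true division). It returns the stored fraction in lowest terms, which is $7205759403792794 / 2^{56}$ reduced by a common factor of 2:

```python
num, den = (0.1).as_integer_ratio()
assert num * 2 == 7205759403792794        # numerator, before reduction
assert den * 2 == 2**56                   # denominator, before reduction
assert 0.1 == 7205759403792794 / 2**56    # the division is exact
```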

If we multiply that fraction by $10^{30}$, we can see the (truncated) value of its 30 most significant decimal digits:

    >>> 7205759403792794L * 10L**30 / 2L**56
    100000000000000005551115123125L

meaning that the exact number stored in the computer is approximately equal to the decimal value 0.100000000000000005551115123125. Rounding that to 17 significant digits gives the 0.10000000000000001 that Python displays (well, will display on any IEEE conforming platform that does best-possible input and output conversions in its C library -- yours may not!).
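
On Python 2.7 or any Python 3, the `decimal` module can display the full exact decimal expansion of the stored binary fraction, of which the 30 digits above are a truncation:

```python
from decimal import Decimal

# Constructing a Decimal from a float converts it exactly, with no rounding.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```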


Published under the terms of the Python License.