
Draft:0.30000000000000004

From Wikipedia, the free encyclopedia

0.30000000000000004 is the result of the most common example used on the internet to demonstrate the small deviations between everyday decimal-fraction arithmetic and the binary floating-point arithmetic used by computers.

The example is that 0.1 + 0.2 results in 0.30000000000000004 when evaluated in the most common binary floating-point datatype, the IEEE 754[1] 'double' (binary64) format.
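
The behaviour can be reproduced directly, for example in a Python interpreter session; any language using IEEE 754 binary64 'double' arithmetic shows the same result (a minimal illustration, not tied to a particular implementation):

    # Python 3 interactive session: 0.1 and 0.2 are stored as binary64 doubles,
    # so the sum prints as the shortest decimal that round-trips to the stored result.
    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False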

These deviations arise because binary datatypes cannot exactly represent most decimal fractions and instead store the nearest representable binary approximations.
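
The exact values that are actually stored can be inspected, for instance with Python's decimal module, which converts a binary64 value to its full decimal expansion (a small sketch using only the standard library):

    from decimal import Decimal

    # Constructing a Decimal from a float is exact and exposes the stored
    # binary64 approximation rather than the decimal literal that was typed.
    print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
    print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125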

Thus the calculation actually performed is 0.1000000000000000055511... + 0.2000000000000000111022... . Its exact result, 0.3000000000000000166533..., is not representable in the datatype (it would require 54 bits of significand where only 53 are available) and is therefore rounded to a representable value, 0.3000000000000000444089..., whose shortest round-tripping decimal representation is 0.30000000000000004.
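
The rounding step can be made visible by comparing the exact sum of the two stored approximations with the value the floating-point addition actually returns (a sketch, again using Python's decimal module, with a raised working precision so that the comparison addition itself is exact):

    from decimal import Decimal, getcontext

    getcontext().prec = 60  # enough digits to add the exact expansions without rounding

    exact_sum = Decimal(0.1) + Decimal(0.2)   # exact sum of the stored approximations
    rounded   = Decimal(0.1 + 0.2)            # the rounded binary64 result of the addition

    print(exact_sum)   # 0.3000000000000000166533453693773481063544750213623046875
    print(rounded)     # 0.3000000000000000444089209850062616169452667236328125
    print(0.1 + 0.2)   # 0.30000000000000004, the shortest round-tripping decimal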

The example has its own website, 'Floating Point Math' at 0.30000000000000004.com.

A humorous, though not accurate in every detail, video about it can be found on YouTube: 'Why do computers suck at math?'[2]

It is not the only example but one of myriads of similar deviations; not only additions but all types of calculations are affected, and not every problem case involves rounding of the result: for example, 4.4 + 2.2 -> 6.6000000000000005 deviates even though the addition itself is performed without rounding, the error coming entirely from the approximated operands.
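
The 4.4 + 2.2 case can be checked with exact rational arithmetic, for example Python's fractions module (a sketch illustrating the claim above that the addition itself introduces no rounding error):

    from fractions import Fraction

    print(4.4 + 2.2)  # 6.6000000000000005

    # Fraction(float) converts the stored binary64 value exactly, and Fraction
    # arithmetic is exact, so this compares the true sum of the two stored
    # approximations with the binary64 result of the addition.
    exact_sum = Fraction(4.4) + Fraction(2.2)
    print(exact_sum == Fraction(4.4 + 2.2))  # True: the deviation comes only from the operands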

The deviations are often considered negligible because they are extremely small; that is true for a single operation, but they may stack up, multiply, or even grow exponentially over many operations and then harm the usability of computer-calculated results.
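
How such tiny per-operation deviations can accumulate is easy to demonstrate, for example by adding 0.1 repeatedly (a minimal sketch, not taken from the article's sources):

    # Adding 0.1 ten thousand times: each step is off by at most a fraction of an
    # ULP, yet the accumulated result visibly differs from the exact value 1000.0.
    total = 0.0
    for _ in range(10_000):
        total += 0.1
    print(total == 1000.0)  # False
    print(total)            # a value close to, but not exactly, 1000.0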

References

  1. ^ "754-2019 - IEEE Standard for Floating-Point Arithmetic ( caution: paywall )". 2019. Retrieved 2024-10-29.{{cite web}}: CS1 maint: url-status (link)