Rounding
Rounding a numerical value means replacing it by another value that is approximately equal but has a shorter, simpler, or more explicit representation; for example, replacing $23.4476 with $23.45, or the fraction 312/941 with 1/3, or the expression √2 with 1.414.
Rounding is often done to obtain a value that is easier to report and communicate than the original. Rounding can also be important to avoid misleadingly precise reporting of a computed number, measurement or estimate; for example, a quantity that was computed as 123,456 but is known to be accurate only to within a few hundred units is better stated as "about 123,500".
On the other hand, rounding of exact numbers will introduce some round-off error in the reported result. Rounding is almost unavoidable when reporting many computations – especially when dividing two numbers in integer or fixed-point arithmetic; when computing mathematical functions such as square roots, logarithms, and sines; or when using a floating-point representation with a fixed number of significant digits. In a sequence of calculations, these rounding errors generally accumulate, and in certain ill-conditioned cases they may make the result meaningless.
Accurate rounding of transcendental mathematical functions is difficult because the number of extra digits that need to be calculated to resolve whether to round up or down cannot be known in advance. This problem is known as "the table-maker's dilemma".
Rounding has many similarities to the quantization that occurs when physical quantities must be encoded by numbers or digital signals.
A wavy equals sign (≈: approximately equal to) is sometimes used to indicate rounding of exact numbers, e.g., 0.75 ≈ 1. This sign was introduced by Alfred George Greenhill in 1892.[1]
Ideal characteristics of rounding methods include:
- Rounding should be done by a function. This way, when the same input is rounded in different instances, the output is unchanged.
- Calculations done with rounding should be close to those done without rounding.
- As a result of (1) and (2), the output from rounding should be close to its input, often as close as possible by some metric.
- To be considered rounding, the range must be a subset of the domain, and in general it has lower cardinality or granularity, so rounding discards information. The classical range is the integers, Z.
- Rounding should preserve symmetries that already exist between the domain and range. With finite precision (or a discrete domain) this translates to removing bias.
- A rounding method should have utility in computer science or human arithmetic where finite precision is used, and speed is a consideration.
But, because it is not usually possible for a method to satisfy all ideal characteristics, many methods exist.
As a general rule, rounding is idempotent,[2] i.e., once a number has been rounded, rounding it again will not change its value. In practice, rounding functions are also monotonic.
Types of rounding
Typical rounding problems include:
| Rounding problem | Example input | Result | Rounding criteria |
|---|---|---|---|
| approximating an irrational number by a fraction | π | 22/7 | denominator < 10 |
| approximating a rational number by another fraction with smaller numerator and denominator | 312/941 | 1/3 | denominator < 10 |
| approximating a fraction that has a periodic decimal expansion by a finite decimal fraction | 5/3 | 1.6667 | 4 trailing digits |
| approximating a fractional decimal number by one with fewer digits | 2.1784 | 2.18 | 2 trailing digits |
| approximating a decimal integer by an integer with more trailing zeros | 23,217 | 23,200 | 2 trailing zeros |
| approximating a large decimal integer using scientific notation | 300,999,999 | 3.01 × 10⁸ | 2 trailing digits |
| approximating a value by a multiple of a specified amount | 48.2 | 45 | multiple of 15 |
Rounding to integer
The most basic form of rounding is to replace an arbitrary number by an integer. All the following rounding modes are concrete implementations of an abstract single-argument "round()" procedure. These are true functions with the exception of those that are random-based.
Directed rounding to an integer
These four methods are called directed rounding, as the displacements from the original number x to the rounded value y are all directed towards or away from the same limiting value (0, +∞, or −∞). Directed rounding is used in interval arithmetic and is often required in financial calculations.
If x is positive, round-down is the same as round-towards-zero, and round-up is the same as round-away-from-zero. If x is negative, round-down is the same as round-away-from-zero, and round-up is the same as round-towards-zero. In any case, if x is integer, y is just x.
Where many calculations are done in sequence, the choice of rounding method can have a very significant effect on the result. A famous instance involved a new index set up by the Vancouver Stock Exchange in 1982. It was initially set at 1000.000 (three decimal places of accuracy), and after 22 months had fallen to about 520 — whereas stock prices had generally increased in the period. The problem was caused by the index being recalculated thousands of times daily, and always being rounded down to 3 decimal places, in such a way that the rounding errors accumulated. Recalculating with better rounding gave an index value of 1098.892 at the end of the same period.[3]
For the examples below, sgn(x) refers to the sign function applied to the original number, x.
Rounding down
- round down (or take the floor, or round towards minus infinity): y is the largest integer that does not exceed x.
Rounding up
- round up (or take the ceiling, or round towards plus infinity): y is the smallest integer that is not less than x.
Rounding towards zero
- round towards zero (or truncate, or round away from infinity): y is the integer part of x, without its fraction digits.
Rounding away from zero
- round away from zero (or round towards infinity): if x is an integer, y is x; else y is the integer that is closest to 0 and is such that x is between 0 and y.
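In closed form, these four modes are y = ⌊x⌋ (round down), y = ⌈x⌉ (round up), y = sgn(x)⌊|x|⌋ (round towards zero), and y = sgn(x)⌈|x|⌉ (round away from zero). A minimal Python sketch of the four modes (the function names are illustrative, not a standard API):

```python
import math

def round_down(x):            # floor: the largest integer <= x
    return math.floor(x)

def round_up(x):              # ceiling: the smallest integer >= x
    return math.ceil(x)

def round_towards_zero(x):    # truncation: drop the fraction digits
    return math.trunc(x)

def round_away_from_zero(x):  # sgn(x) * ceil(|x|)
    return int(math.copysign(math.ceil(abs(x)), x))

assert round_down(-1.5) == -2 and round_up(-1.5) == -1
assert round_towards_zero(-1.5) == -1 and round_away_from_zero(-1.5) == -2
```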
Rounding to the nearest integer
Rounding a number x to the nearest integer requires some tie-breaking rule for those cases when x is exactly half-way between two integers — that is, when the fraction part of x is exactly 0.5.
If it were not for the 0.5 fractional parts, the round-off errors introduced by the round to nearest method would be symmetric: for every fraction that gets rounded up (such as 0.268), there is a complementary fraction (namely, 0.732) that gets rounded down by the same amount.
When rounding a large set of fixed-point numbers with uniformly distributed fractional parts, the rounding errors of all values (excluding those whose fractional part is exactly 0.5) statistically compensate one another. This means that the expected (average) value of the rounded numbers equals the expected value of the original numbers once values with fractional part 0.5 are removed from the set.
In practice, floating-point numbers are typically used, which introduces further computational nuances because they are not equally spaced.
Round half up
The following tie-breaking rule, called round half up (or round half towards positive infinity), is widely used in many disciplines.[citation needed] That is, half-way values of x are always rounded up.
- If the fraction of x is exactly 0.5, then y = x + 0.5
For example, by this rule the value 23.5 gets rounded to 24, but −23.5 gets rounded to −23.
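For all x, not only tie values, this rule can be written in closed form as y = ⌊x + 0.5⌋, which is why it is sometimes implemented by adding 0.5 and taking the floor.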
However, some programming languages (such as Java and Python) use the name "half up" for round half away from zero instead.[4][5]
This method only requires checking one digit to determine rounding direction in 2's complement and similar representations.
Round half down
One may also use round half down (or round half towards negative infinity) as opposed to the more common round half up.
- If the fraction of x is exactly 0.5, then y = x − 0.5
For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −24.
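In closed form, round half down is y = ⌈x − 0.5⌉ for all x.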
Round half towards zero
One may also round half towards zero (or round half away from infinity) as opposed to the conventional round half away from zero.
- If the fraction of x is exactly 0.5, then y = x − 0.5 if x is positive, and y = x + 0.5 if x is negative.
For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −23.
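In closed form, using the sign function introduced above, this rule is y = sgn(x)⌈|x| − 0.5⌉.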
This method treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias towards zero.
Round half away from zero
The other tie-breaking method commonly taught and used is the round half away from zero (or round half towards infinity), namely:
- If the fraction of x is exactly 0.5, then y = x + 0.5 if x is positive, and y = x − 0.5 if x is negative.
For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −24.
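In closed form, this rule is y = sgn(x)⌊|x| + 0.5⌋.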
This can be more efficient on binary computers because only the first omitted bit needs to be considered to determine if it rounds up (on a 1) or down (on a 0). This is one method used when rounding to significant figures due to its simplicity.
This method, also known as commercial rounding, treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias away from zero.
It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro), as it is easy to explain by considering only the first fractional digit, independently of any supplementary precision digits or of the sign of the amount (ensuring strict equivalence between the payer and the recipient of the amount).
Round half to even
A tie-breaking rule without positive/negative bias and without bias toward/away from zero is round half to even. By this convention, if the fractional part of x is 0.5, then y is the even integer nearest to x. Thus, for example, +23.5 becomes +24, as does +24.5; while −23.5 becomes −24, as does −24.5. This function minimizes the expected error when summing over rounded figures, even when the inputs are mostly positive or mostly negative.
This variant of the round-to-nearest method is also called convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, odd–even rounding,[6] or bankers' rounding.
This is the default rounding mode used in IEEE 754 computing functions and operators (see also Nearest integer function), and the more sophisticated mode used when rounding to significant figures.
By eliminating bias, repeated rounded addition or subtraction of independent numbers will give a result with an error that tends to grow in proportion to the square root of the number of operations rather than linearly. See random walk for more.
However, this rule distorts the distribution by increasing the probability of evens relative to odds. Typically this is less important than the biases that are eliminated by this method.
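Python 3's built-in round() uses this rule for floats, which makes the examples above easy to check (the inputs here are exactly representable in binary floating point, so no representation error interferes):

```python
# Ties go to the even neighbour; non-ties go to the nearest integer.
assert [round(v) for v in (23.5, 24.5, -23.5, -24.5)] == [24, 24, -24, -24]
assert [round(v) for v in (0.5, 1.5, 2.5, 3.5)] == [0, 2, 2, 4]
```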
Round half to odd
A similar tie-breaking rule is round half to odd. In this approach, if the fraction of x is 0.5, then y is the odd integer nearest to x. Thus, for example, +23.5 becomes +23, as does +22.5; while −23.5 becomes −23, as does −22.5.
This method is also free from positive/negative bias and bias toward/away from zero.
This variant is almost never used in computations, except in situations where one wants to avoid increasing the scale of floating-point numbers, which have a limited exponent range. With round half to even, a non-infinite number would round to infinity, and a small denormal value would round to a normal non-zero value. Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible for even-based number systems (such as binary and decimal).
Random-based rounding of an integer
Alternating tie-breaking
One method, more obscure than most, is to alternate direction when rounding a number with 0.5 fractional part. All others are rounded to the closest integer.
- Whenever the fractional part is 0.5, alternate rounding up or down: for the first occurrence of a 0.5 fractional part, round up; for the second occurrence, round down; and so on. (Alternatively the first 0.5 fractional part rounding can be determined by a random seed.)
Provided that 0.5 fractional parts occur many times more often than restarts of the alternation "counting", this method is effectively free of bias. With guaranteed zero bias, it is useful if the numbers are to be summed or averaged.
Random tie-breaking
- If the fractional part of x is 0.5, choose y randomly among x + 0.5 and x − 0.5, with equal probability. All others are rounded to the closest integer.
Like round-half-to-even and round-half-to-odd, this rule is essentially free of overall bias; but it is also fair among even and odd y values. The advantage over alternate tie-breaking is that the last direction of rounding on 0.5 fractional part does not have to be "remembered".
Stochastic rounding
Rounding x to one of the two nearest straddling integers, with a probability dependent on the proximity, is called stochastic rounding and will give an unbiased result on average.[7]
For example, 1.6 would be rounded to 1 with probability 0.4 and to 2 with probability 0.6.
Stochastic rounding is accurate in a way that a deterministic rounding function can never be. For example, suppose one starts with 0 and adds 0.3 to it one hundred times, rounding the running total after each addition. The result would be 0 with regular rounding, but with stochastic rounding the expected result is 30, the same value obtained without rounding. This can be useful in machine learning, where training may iteratively use low-precision arithmetic.[7] Stochastic rounding is also a way to achieve one-dimensional dithering.
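A minimal Python sketch of stochastic rounding, together with the running-total experiment described above (stochastic_round is an illustrative name, not a standard function):

```python
import math, random

def stochastic_round(x):
    # Round up with probability equal to the fractional part,
    # down otherwise, so the expected result equals x.
    f = math.floor(x)
    return f + (random.random() < x - f)

total = 0
for _ in range(100):
    total = stochastic_round(total + 0.3)
print(total)  # close to 30 on average; plain round-to-nearest would yield 0
```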
Comparison of approaches for rounding to an integer
The first ten columns are functional (deterministic) methods. For the three random-based methods, each cell gives the mean μ of 99 roundings of the value and the standard deviation σ of that mean.

| Value | Round down (towards −∞) | Round up (towards +∞) | Round towards zero | Round away from zero | Round half down | Round half up | Round half towards zero | Round half away from zero | Round half to even | Round half to odd | Alternating tie-break | Random tie-break | Stochastic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| +1.8 | +1 | +2 | +1 | +2 | +2 | +2 | +2 | +2 | +2 | +2 | +2 (σ 0) | +2 (σ 0) | +1.8 (σ 0.04) |
| +1.5 | +1 | +2 | +1 | +2 | +1 | +2 | +1 | +2 | +2 | +1 | +1.505 (σ 0) | +1.5 (σ 0.05) | +1.5 (σ 0.05) |
| +1.2 | +1 | +2 | +1 | +2 | +1 | +1 | +1 | +1 | +1 | +1 | +1 (σ 0) | +1 (σ 0) | +1.2 (σ 0.04) |
| +0.8 | 0 | +1 | 0 | +1 | +1 | +1 | +1 | +1 | +1 | +1 | +1 (σ 0) | +1 (σ 0) | +0.8 (σ 0.04) |
| +0.5 | 0 | +1 | 0 | +1 | 0 | +1 | 0 | +1 | 0 | +1 | +0.505 (σ 0) | +0.5 (σ 0.05) | +0.5 (σ 0.05) |
| +0.2 | 0 | +1 | 0 | +1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 (σ 0) | 0 (σ 0) | +0.2 (σ 0.04) |
| −0.2 | −1 | 0 | 0 | −1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 (σ 0) | 0 (σ 0) | −0.2 (σ 0.04) |
| −0.5 | −1 | 0 | 0 | −1 | −1 | 0 | 0 | −1 | 0 | −1 | −0.495 (σ 0) | −0.5 (σ 0.05) | −0.5 (σ 0.05) |
| −0.8 | −1 | 0 | 0 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 (σ 0) | −1 (σ 0) | −0.8 (σ 0.04) |
| −1.2 | −2 | −1 | −1 | −2 | −1 | −1 | −1 | −1 | −1 | −1 | −1 (σ 0) | −1 (σ 0) | −1.2 (σ 0.04) |
| −1.5 | −2 | −1 | −1 | −2 | −2 | −1 | −1 | −2 | −2 | −1 | −1.495 (σ 0) | −1.5 (σ 0.05) | −1.5 (σ 0.05) |
| −1.8 | −2 | −1 | −1 | −2 | −2 | −2 | −2 | −2 | −2 | −2 | −2 (σ 0) | −2 (σ 0) | −1.8 (σ 0.04) |
Rounding to other values
Rounding to a specified multiple
The most common type of rounding is to round to an integer; or, more generally, to an integer multiple of some increment — such as rounding to whole tenths of seconds, hundredths of a dollar, to whole multiples of 1/2 or 1/8 inch, to whole dozens or thousands, etc.
In general, rounding a number x to a multiple of some specified positive value m entails the following steps: divide x by m, round the quotient to an integer with one of the round() methods above, and multiply the result back by m; that is, y = round(x/m) × m.
For example, rounding x = 2.1784 dollars to whole cents (i.e., to a multiple of 0.01) entails computing 2.1784/0.01 = 217.84, then rounding that to 218, and finally computing 218 × 0.01 = 2.18.
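A sketch of these steps in Python (round() here is Python's round half to even; any tie-breaking rule discussed above could be substituted, and exact decimal results would require the decimal module):

```python
import math

def round_to_multiple(x, m):
    # y = round(x / m) * m
    return round(x / m) * m

assert math.isclose(round_to_multiple(2.1784, 0.01), 2.18)
assert round_to_multiple(48.2, 15) == 45
```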
When rounding to a predetermined number of significant digits, the increment m depends on the magnitude of the number to be rounded (or of the rounded result).
The increment m is normally a finite fraction in whatever number system is used to represent the numbers. For display to humans, that usually means the decimal number system (that is, m is an integer times a power of 10, like 1/1000 or 25/100). For intermediate values stored in digital computers, it often means the binary number system (m is an integer times a power of 2).
The abstract single-argument "round()" function that returns an integer from an arbitrary real value has at least a dozen distinct concrete definitions presented in the rounding to integer section. The abstract two-argument "roundToMultiple()" function is formally defined here, but in many cases it is used with the implicit value m = 1 for the increment and then reduces to the equivalent abstract single-argument function, with also the same dozen distinct concrete definitions.
Logarithmic rounding
Rounding to a specified power
Rounding to a specified power is very different from rounding to a specified multiple; for example, it is common in computing to need to round a number to a whole power of 2. In general, to round a positive number x to a power of some specified integer b greater than 1, the steps are: take the base-b logarithm of x, round that logarithm to an integer by one of the methods above, and raise b to the resulting integer.
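A sketch of these steps in Python (round_to_power is an illustrative name; note that math.log can carry a tiny floating-point error when x is exactly a power of b):

```python
import math

def round_to_power(x, b, rounder=round):
    # rounder may be any of the integer rounding functions above.
    return b ** rounder(math.log(x, b))

assert round_to_power(100, 2, math.floor) == 64    # largest power of 2 <= 100
assert round_to_power(100, 2, math.ceil) == 128    # smallest power of 2 >= 100
```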
Many of the caveats applicable to rounding to a multiple are applicable to rounding to a power.
Scaled rounding
This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.
For example, resistors are supplied with preferred numbers on a logarithmic scale: resistors with 10% accuracy are supplied with nominal values 100, 121, 147, 178, 215, etc. If a calculation indicates that a resistor of 165 ohms is required, then log(147) = 2.167, log(165) = 2.217 and log(178) = 2.250. The logarithm of 165 is closer to the logarithm of 178, so a 178-ohm resistor would be the first choice if there are no other considerations.
Whether a value x ∈ (a, b) rounds to a or b depends upon whether the squared value x2 is greater than or less than the product ab. The value 165 rounds to 178 in the resistors example because 1652 = 27225 is greater than 147 × 178 = 26166.
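A sketch of the resistor example in Python, choosing the candidate whose logarithm is nearest (the list of preferred values is taken from the example above):

```python
import math

preferred = [100, 121, 147, 178, 215]

def round_log_scale(x, values):
    # Nearest value on a logarithmic scale; equivalent to comparing
    # x*x against the product of the two straddling values.
    return min(values, key=lambda v: abs(math.log(x / v)))

assert round_log_scale(165, preferred) == 178   # 165**2 = 27225 > 147*178 = 26166
```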
Floating-point rounding
In floating-point arithmetic, rounding aims to turn a given value x into a value y with a specified number of significant digits. In other words, y should be a multiple of a number m that depends on the magnitude of x. The number m is a power of the base (usually 2 or 10) of the floating-point representation.
Apart from this detail, all the variants of rounding discussed above apply to the rounding of floating-point numbers as well. The algorithm for such rounding is presented in the Scaled rounding section above, but with a constant scaling factor s = 1, and an integer base b > 1.
Where the rounded result would overflow, the result for a directed rounding is either the appropriate signed infinity when "rounding away from zero", or the highest representable positive finite number (or, if x is negative, the lowest representable negative finite number) when "rounding towards zero". The result of an overflow in the usual case of round to nearest is always the appropriate infinity.
Rounding to a simple fraction
In some contexts it is desirable to round a given number x to a "neat" fraction — that is, the nearest fraction y = m/n whose numerator m and denominator n do not exceed a given maximum. This problem is fairly distinct from that of rounding a value to a fixed number of decimal or binary digits, or to a multiple of a given unit m. This problem is related to Farey sequences, the Stern–Brocot tree, and continued fractions.
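Python's fractions module exposes exactly this operation, implemented via continued-fraction convergents, which makes for a quick illustration:

```python
import math
from fractions import Fraction

assert Fraction(312, 941).limit_denominator(10) == Fraction(1, 3)
assert Fraction(math.pi).limit_denominator(10) == Fraction(22, 7)
```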
Rounding to an available value
Finished lumber, writing paper, capacitors, and many other products are usually sold in only a few standard sizes.
Many design procedures describe how to calculate an approximate value, and then "round" to some standard size using phrases such as "round down to nearest standard value", "round up to nearest standard value", or "round to nearest standard value".[8][9]
When a set of preferred values is equally spaced on a logarithmic scale, choosing the closest preferred value to any given value can be seen as a form of scaled rounding. Such rounded values can be directly calculated.[10]
Rounding in other contexts
Dithering and error diffusion
When digitizing continuous signals, such as sound waves, the overall effect of a number of measurements is more important than the accuracy of each individual measurement. In these circumstances, dithering, and a related technique, error diffusion, are normally used. A related technique called pulse-width modulation is used to achieve analog type output from an inertial device by rapidly pulsing the power with a variable duty cycle.
Error diffusion tries to ensure the error, on average, is minimized. When dealing with a gentle slope from one to zero, the output would be zero for the first few terms until the sum of the error and the current value becomes greater than 0.5, in which case a 1 is output and the difference subtracted from the error so far. Floyd–Steinberg dithering is a popular error diffusion procedure when digitizing images.
As a one-dimensional example, suppose the numbers 0.9677, 0.9204, 0.7451, and 0.3091 occur in order and each is to be rounded to a multiple of 0.01. In this case the cumulative sums, 0.9677, 1.8881 = 0.9677 + 0.9204, 2.6332 = 0.9677 + 0.9204 + 0.7451, and 2.9423 = 0.9677 + 0.9204 + 0.7451 + 0.3091, are each rounded to a multiple of 0.01: 0.97, 1.89, 2.63, and 2.94. The first of these and the differences of adjacent values give the desired rounded values: 0.97, 0.92 = 1.89 − 0.97, 0.74 = 2.63 − 1.89, and 0.31 = 2.94 − 2.63.
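A short Python rendering of this one-dimensional scheme, rounding cumulative sums and emitting their differences (diffuse_round is an illustrative name):

```python
def diffuse_round(values, m=0.01):
    out, prev, total = [], 0.0, 0.0
    for v in values:
        total += v
        r = round(total / m) * m         # running total rounded to a multiple of m
        out.append(round(r - prev, 10))  # difference; trailing round trims float noise
        prev = r
    return out

assert diffuse_round([0.9677, 0.9204, 0.7451, 0.3091]) == [0.97, 0.92, 0.74, 0.31]
```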
Monte Carlo arithmetic
Monte Carlo arithmetic is a technique in Monte Carlo methods where the rounding is randomly up or down. Stochastic rounding can be used for Monte Carlo arithmetic, but in general, just rounding up or down with equal probability is more often used. Repeated runs will give a random distribution of results which can indicate how stable the computation is.[11]
Exact computation with rounded arithmetic
It is possible to use rounded arithmetic to evaluate the exact value of a function with integer domain and range. For example, if we know that an integer n is a perfect square, we can compute its square root by converting n to a floating-point value z, computing the approximate square root x of z with floating point, and then rounding x to the nearest integer y. If n is not too big, the floating-point round-off error in x will be less than 0.5, so the rounded value y will be the exact square root of n. This is essentially why slide rules could be used for exact arithmetic.
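For instance, in Python (exact as long as n is small enough that the floating-point square root lands within 0.5 of the true value):

```python
import math

n = 12345 ** 2              # a perfect square
y = round(math.sqrt(n))     # approximate square root, rounded to nearest integer
assert y * y == n
```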
Double rounding
Rounding a number twice in succession to different levels of precision, with the latter precision being coarser, is not guaranteed to give the same result as rounding once directly to the final precision, except in the case of directed rounding.[12] For instance, rounding 9.46 to one decimal gives 9.5, which then rounds to 10 under round half to even, whereas 9.46 rounded directly to an integer gives 9. Borman and Chatfield[13] discuss the implications of double rounding when comparing data rounded to one decimal place with specification limits expressed using integers.
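The 9.46 example can be reproduced with Python's round(), which rounds halves to even:

```python
assert round(9.46, 1) == 9.5   # first rounding, to one decimal place
assert round(9.5) == 10        # second rounding: 9.5 -> 10 (ties to even)
assert round(9.46) == 9        # rounding once, directly to integer
```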
In Martinez v. Allstate and Sendejo v. Farmers, litigated between 1995 and 1997, the insurance companies argued that double rounding premiums was permissible and in fact required. The US courts ruled against the insurance companies and ordered them to adopt rules to ensure single rounding.[14]
Some computer languages and the IEEE 754-2008 standard dictate that in straightforward calculations the result should not be rounded twice. This was a particular problem with Java, which is designed to run identically on different machines; special programming tricks had to be used to achieve this with x87 floating point.[15][16] The Java language was later changed to allow different results where the difference does not matter, and to require a strictfp qualifier when the results must conform accurately.
In some algorithms, an intermediate result is computed and rounded and then, after more computation, must be rounded to the final precision. Double rounding error can be avoided by choosing an adequate rounding precision for the intermediate computation and/or by using a proven rounding method for intermediate and final rounding.[17][18][19]
Table-maker's dilemma
William Kahan coined the term "The Table-Maker's Dilemma" for the unknown cost of rounding transcendental functions:
"Nobody knows how much it would cost to compute yw correctly rounded for every two floating-point arguments at which it does not over/underflow. Instead, reputable math libraries compute elementary transcendental functions mostly within slightly more than half an ulp and almost always well within one ulp. Why can't yw be rounded within half an ulp like SQRT? Because nobody knows how much computation it would cost... No general way exists to predict how many extra digits will have to be carried to compute a transcendental expression and round it correctly to some preassigned number of digits. Even the fact (if true) that a finite number of extra digits will ultimately suffice may be a deep theorem."[20]
The IEEE floating-point standard guarantees that add, subtract, multiply, divide, fused multiply–add, square root, and floating-point remainder will give the correctly rounded result of the infinite precision operation. No such guarantee was given in the 1985 standard for more complex functions and they are typically only accurate to within the last bit at best. However, the 2008 standard guarantees that conforming implementations will give correctly rounded results which respect the active rounding mode; implementation of the functions, however, is optional.
Using the Gelfond–Schneider theorem and Lindemann–Weierstrass theorem many of the standard elementary functions can be proved to return transcendental results when given rational non-zero arguments; therefore it is always possible to correctly round such functions. However, determining a limit for a given precision on how accurate results need to be computed, before a correctly rounded result can be guaranteed, may demand a lot of computation time.[21]
Some programming packages offer correct rounding. The GNU MPFR package gives correctly rounded arbitrary precision results. Some other libraries implement elementary functions with correct rounding in double precision:
- IBM's libultim, in rounding to nearest only.[22]
- Sun Microsystems's libmcr, in the 4 rounding modes.[23]
- CRlibm, written by the former Arénaire team (LIP, ENS Lyon). It supports the four rounding modes and has been proved correct.[24]
There exist computable numbers for which a rounded value can never be determined no matter how many digits are calculated; specific instances cannot be given, but this follows from the undecidability of the halting problem. For instance, if Goldbach's conjecture is true but unprovable, then the result of rounding the following value up to the next integer cannot be determined: 10⁻ⁿ where n is the first even number greater than 4 which is not the sum of two primes, or 0 if there is no such number. The rounded result is 1 if such a number exists and 0 if no such number exists. The value before rounding can, however, be approximated to any given precision even if the conjecture is unprovable.
Interaction with string searches
Rounding can adversely affect a string search for a number. For example, π rounded to four digits is "3.1416" but a simple search for this string will not discover "3.14159" or any other value of π rounded to more than four digits. In contrast, truncation does not suffer from this problem; for example, a simple string search for "3.1415", which is π truncated to four digits, will discover values of π truncated to more than four digits.
History
The concept of rounding is very old, perhaps older even than the concept of division. Some ancient clay tablets found in Mesopotamia contain tables with rounded values of reciprocals and square roots in base 60.[25] Rounded approximations to π, the length of the year, and the length of the month are also ancient—see base 60 examples.
The round-to-even method has served as the ASTM (E-29) standard since 1940. The origins of the terms unbiased rounding and statistician's rounding are fairly self-explanatory. In the 1906 fourth edition of Probability and Theory of Errors,[26] Robert Simpson Woodward called this "the computer's rule", indicating that it was then in common use by human computers who calculated mathematical tables. Churchill Eisenhart indicated that the practice was already "well established" in data analysis by the 1940s.[27]
The origin of the term bankers' rounding remains more obscure. If this rounding method was ever a standard in banking, the evidence has proved extremely difficult to find. To the contrary, section 2 of the European Commission report The Introduction of the Euro and the Rounding of Currency Amounts[28] suggests that there had previously been no standard approach to rounding in banking; and it specifies that "half-way" amounts should be rounded up.
Until the 1980s, the rounding method used in floating-point computer arithmetic was usually fixed by the hardware, poorly documented, inconsistent, and different for each brand and model of computer. This situation changed after the IEEE 754 floating-point standard was adopted by most computer manufacturers. The standard allows the user to choose among several rounding modes, and in each case specifies precisely how the results should be rounded. These features made numerical computations more predictable and machine-independent, and made possible the efficient and consistent implementation of interval arithmetic.
Rounding functions in programming languages
Most programming languages provide functions or special syntax to round fractional numbers in various ways. The earliest numeric languages, such as FORTRAN and C, would provide only one method, usually truncation (towards zero). This default method could be implied in certain contexts, such as when assigning a fractional number to an integer variable, or using a fractional number as an index of an array. Other kinds of rounding had to be programmed explicitly; for example, rounding a positive number to the nearest integer could be implemented by adding 0.5 and truncating.
In recent decades, however, the syntax and/or the standard libraries of most languages have commonly provided at least the four basic rounding functions (up, down, to nearest, and towards zero). The tie-breaking method may vary depending on the language and version, and/or may be selectable by the programmer. Several languages follow the lead of the IEEE 754 floating-point standard and define these functions as taking a double-precision float argument and returning the result of the same type, which may then be converted to an integer if necessary. This approach may avoid spurious overflows, since floating-point types have a larger range than integer types. Some languages, such as PHP, provide functions that round a value to a specified number of decimal digits, e.g. from 4321.5678 to 4321.57 or 4300. In addition, many languages provide a printf or similar string-formatting function, which allows one to convert a fractional number to a string rounded to a user-specified number of decimal places (the precision). On the other hand, truncation (round towards zero) is still the default rounding method used by many languages, especially for the division of two integer values.
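For instance, Python exposes the four basic modes in its standard library, plus rounding to a given number of decimal digits:

```python
import math

x = -2.7
assert math.floor(x) == -3    # round down (towards minus infinity)
assert math.ceil(x) == -2     # round up (towards plus infinity)
assert math.trunc(x) == -2    # round towards zero
assert round(x) == -3         # round to nearest (ties to even)
assert round(4321.5678, 2) == 4321.57
```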
By contrast, CSS and SVG do not define any specific maximum precision for numbers and measurements, which are treated and exposed in their DOM and IDL interfaces as strings, as if they had infinite precision, and do not discriminate between integers and floating-point values; however, implementations of these languages typically convert these numbers into IEEE 754 double-precision floating-point values before exposing the computed digits with a limited precision (notably within standard JavaScript or ECMAScript[29] interface bindings).
Other rounding standards
Some disciplines or institutions have issued standards or directives for rounding.
US weather observations
In a guideline issued in mid-1966,[30] the U.S. Office of the Federal Coordinator for Meteorology determined that weather data should be rounded to the nearest round number, with the "round half up" tie-breaking rule. For example, 1.5 rounded to integer should become 2, and −1.5 should become −1. Prior to that date, the tie-breaking rule was "round half away from zero".
Negative zero in meteorology
Some meteorologists may write "−0" to indicate a temperature between 0.0 and −0.5 degrees (exclusive) that was rounded to an integer. This notation is used when the negative sign is considered important, no matter how small its magnitude; for example, when rounding temperatures on the Celsius scale, where below zero indicates freezing.[citation needed]
Byte values
Because digital information is stored in binary format, computer memory is usually produced in sizes that are a power of 2. So a common size is 1024 bytes (2¹⁰ bytes), which can be referred to as 1 kilobyte. Similarly, 1,048,576 bytes (2²⁰ bytes) can be referred to as 1 megabyte. This may or may not be considered rounding. More information on the different representations is at the entry for kilobyte.
See also
- Gal's accurate tables
- Interval arithmetic
- ISO 80000-1:2009
- Kahan summation algorithm
- Nearest integer function
- Truncation
- Signed-digit representation
- Swedish rounding, to avoid the use of coins of extremely low value
References
- ^ Isaiah Lankham, Bruno Nachtergaele, Anne Schilling: Linear Algebra as an Introduction to Abstract Mathematics. World Scientific, Singapore 2016, ISBN 978-981-4730-35-8, p. 186.
- ^ Kulisch, Ulrich (July 1977). "Mathematical foundation of computer arithmetic". IEEE Transactions on Computers. C-26 (7): 610–621. doi:10.1109/TC.1977.1674893.
- ^ Nicholas J. Higham (2002). Accuracy and stability of numerical algorithms. p. 54. ISBN 978-0-89871-521-7.
- ^ "java.math.RoundingMode". Oracle.
- ^ "decimal — Decimal fixed point and floating point arithmetic". Python Software Foundation.
- ^ Engineering Drafting Standards Manual (NASA), X-673-64-1F, p90
- ^ a b Gupta, Suyog; Agrawal, Ankur; Gopalakrishnan, Kailash; Narayanan, Pritish (9 February 2016). "Deep Learning with Limited Numerical Precision". p. 3. arXiv:1502.02551.
- ^ "Zener Diode Voltage Regulators"
- ^ "Build a Mirror Tester"
- ^ Bruce Trump, Christine Schneider. "Excel Formula Calculates Standard 1%-Resistor Values". Electronic Design, January 21, 2002. [1]
- ^ Parker, D Stott; Eggert, Paul R.; Pierce, Brad (28 March 2000). "Monte Carlo Arithmetic: a framework for the statistical analysis of roundoff errors". IEEE Computation in Science and Engineering.
- ^ Another case where double rounding always leads to the same value as directly rounding to the final precision is when the radix is odd.
- ^ Borman, Phil; Chatfield, Marion (10 November 2015). "Avoid the perils of using rounded data". Journal of Pharmaceutical and Biomedical Analysis. 115: 506–507. doi:10.1016/j.jpba.2015.07.021.
- ^ Deborah R. Hensler (2000). Class Action Dilemmas: Pursuing Public Goals for Private Gain. RAND. pp. 255–293. ISBN 0-8330-2601-1.
- ^ Samuel A. Figueroa (July 1995). "When is double rounding innocuous?". ACM SIGNUM Newsletter. 30 (3). ACM: 21–25. doi:10.1145/221332.221334.
- ^ Roger Golliver (October 1998). "Efficiently producing default orthogonal IEEE double results using extended IEEE hardware" (PDF). Intel.
- ^ Moore, J Strother; Lynch, Tom; Kaufmann, Matt (1996). "A mechanically checked proof of the correctness of the kernel of the AMD5K86 floating-point division algorithm" (PDF). IEEE Transactions on Computers. 47. CiteSeerX 10.1.1.43.3309. doi:10.1109/12.713311. Retrieved 2016-08-02.
- ^ Boldo, Sylvie; Melquiond, Guillaume (2008). "Emulation of a FMA and correctly-rounded sums: proved algorithms using rounding to odd" (PDF). HAL - Inria. doi:10.1109/TC.2007.70819. Retrieved 2016-08-02.
- ^ "21718 – real.c rounding not perfect". gcc.gnu.org.
- ^ Kahan, William. "A Logarithm Too Clever by Half". Retrieved 2008-11-14.
- ^ Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). "Chapter 12: Solving the Table Maker's Dilemma". Handbook of Floating-Point Arithmetic (1 ed.). Birkhäuser. doi:10.1007/978-0-8176-4705-6. ISBN 978-0-8176-4704-9. LCCN 2009939668.
- ^ "libultim – ultimate correctly-rounded elementary-function library".
- ^ "libmcr – correctly-rounded elementary-function library".
- ^ "CRlibm – Correctly Rounded mathematical library". Archived from the original on 2016-10-27.
- ^ Duncan J. Melville. "YBC 7289 clay tablet". 2006
- ^ "Probability and theory of errors". historical.library.cornell.edu.
- ^ Churchill Eisenhart (1947). "Effects of Rounding or Grouping Data". Selected Techniques of Statistical Analysis for Scientific and Industrial Research, and Production and Management Engineering. New York: McGraw-Hill. pp. 187–223. Retrieved 30 January 2014.
- ^ "The Introduction of the Euro and the Rounding of Currency Amounts" (PDF). European Commission. http://ec.europa.eu/economy_finance/publications/publication1224_en.pdf
- ^ "ECMA-262 ECMAScript Language Specification" (PDF). ecma-international.org.
- ^ OFCM, 2005: Federal Meteorological Handbook No. 1 Archived 1999-04-20 at the Wayback Machine, Washington, DC., 104 pp.
External links
- Weisstein, Eric W. "Rounding". MathWorld.
- An introduction to different rounding algorithms that is accessible to a general audience but especially useful to those studying computer science and electronics.
- How To Implement Custom Rounding Procedures by Microsoft