Talk:Fixed-point arithmetic

Usage for speed

My understanding was that games and other graphical applications might use fixed point for speed even when the architecture *does* have an FPU. Perhaps someone who knows about this could add a sentence or two. —The preceding unsigned comment was added by 128.40.213.241 (talkcontribs) 11:47, 6 July 2005.

Standard notation

Is the standard notation for fixed point "1.15" or "Q1.15"? —The preceding unsigned comment was added by DavidCary (talkcontribs) 06:53, 24 October 2005 (UTC)

In Texas Instruments documentation I have only seen Qn notation, where n is the fractional bit count. —The preceding unsigned comment was added by Petedarnell (talkcontribs) 23:28, 24 August 2006 (UTC)

TI describes their use of "Qm.f" notation in their libraries here: http://focus.ti.com/lit/ug/spru565b/spru565b.pdf, Appendix A.2 "Fractional Q Formats", pg. A-3.

Note that TI makes the sign bit implicit, i.e. the word length (in bits) is m+f+1. This differs from the description on the main page, which seems to say that the word length is m+f.

Would someone kindly verify whether this difference is truly a difference in common usage? Bvacaliuc 18:35, 14 October 2006 (UTC)[reply]


Hello all

If we are talking about a general fixed-point notation, please be aware that fixed-point notation is often used in the context of hardware descriptions (e.g., VHDL, Verilog): I would strongly prefer a concise notation with a clear definition of the sign bit. According to this thread:

  • TI: implicit additional sign bit, i.e.,
 signed Q1.14 has 16 bits
 unsigned Q1.14 has 15 bits
  • Matlab: implicit sign bit, i.e.,
 q = quantizer('fixed', [16 14], 'round', 'saturate') corresponds to signed Q1.14
 q = quantizer('ufixed', [15 14], 'round', 'saturate') corresponds to unsigned Q1.14
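
To make the bit-count difference concrete, here is a minimal C sketch of a signed Q1.14 value under the TI-style counting (a sign bit in addition to the 1 integer and 14 fraction bits, so 16 bits total, which Matlab would describe as [16 14]); the helper names are purely illustrative and come from neither toolchain:

 #include <stdint.h>

 #define Q14_FRAC_BITS 14   /* 14 fraction bits, 1 integer bit, sign bit implied */

 /* Convert a real value to signed Q1.14 stored in a 16-bit word. */
 static int16_t to_q1_14(double x)
 {
     double scaled = x * (1 << Q14_FRAC_BITS);     /* multiply by 2^14 */
     if (scaled > INT16_MAX) scaled = INT16_MAX;   /* saturate to the 16-bit range */
     if (scaled < INT16_MIN) scaled = INT16_MIN;
     return (int16_t)scaled;
 }

 static double from_q1_14(int16_t q)
 {
     return (double)q / (1 << Q14_FRAC_BITS);      /* divide by 2^14 */
 }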

Both notations, Q1.14 and [16 14], are not entirely clear. Personally, I have used Matlab heavily with the above notation for fixed-point characterization of VHDL blocks. I do not prefer one over the other, but the Matlab notation has the slight advantage that you instantly know the total word width without requiring any additional information.

best regards, Peter, 2009/07/21 —Preceding unsigned comment added by 217.162.95.96 (talk) 13:02, 21 July 2009 (UTC)[reply]


Hi! Q15 is the same as Q1.15. If the number of integer bits is 1, it can be omitted, because that is the most commonplace form of fixed-point arithmetic: the signal takes values from -1.000 to a little less than 1.000. That is massively useful in signal processing, where the magnitudes of signals are traditionally represented this way. That, in turn, is because it is the native format in which the integer ALU of a 16-bit processor (such as the 8086) calculates when using signed integers. Early computers had no floating-point logic, so signal processing had to be done with integer arithmetic. For instance, on the 8086 you would do an unsigned Q16 multiplication using the MUL instruction. It takes the first operand from AX (in 16-bit mode), the second operand from a general-purpose register or memory, and stores the result in DX:AX (DX has the high 16 bits and AX the low 16 bits; a 16-bit number multiplied by a 16-bit number is a 32-bit number). So I would calculate:

 MOV AX, (first Q15 number)
 MOV BX, (second Q15 number)
 MUL BX
 ;-- the high 16 bits of the result are now in DX; I can copy it from there.

By selecting the 16 leftmost bits (DX) as the result, I get the shift-by-16 for free, and my result is thus in Q16 form. Try it! Great fun! I'd imagine this is how the fixed-point system was "discovered".
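
For readers without an 8086 handy, a rough C equivalent of the same trick might look like this (a sketch only; it assumes 16-bit unsigned operands and a 32-bit intermediate, and the function name is made up):

 #include <stdint.h>

 /* Unsigned Q16 multiply: a and b each stand for value/2^16.
    The full 32-bit product is what MUL leaves in DX:AX; keeping only the
    high 16 bits (DX) is the free shift-by-16, so the result is again Q16. */
 static uint16_t q16_mul(uint16_t a, uint16_t b)
 {
     uint32_t product = (uint32_t)a * (uint32_t)b;  /* DX:AX on the 8086 */
     return (uint16_t)(product >> 16);              /* keep DX, the high half */
 }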

What really bugs me is that there doesn't seem to be a standard for differentiating between signed and unsigned fractionals. The only thing that can be trusted is that the Q notation tells the number of fractional bits. So Q15 means that there are 15 bits after the decimal point (well, actually it should be called the binary point). It doesn't globally guarantee anything else about the internal representation. For instance, the internal representation could be two's complement (common) or sign+magnitude (uncommon, except in floating-point units). But that's not so bad, since you seldom need to know the internal representation: it does not matter which format the data is in inside the computer unless you need to transfer it from one computer to another in raw form. The notation guarantees that you have 15 fractional bits, and that's it. Deep down in the depths of the machine you might even have more (for instance, if the hardware is 32-bit), but that should not matter.

If you're really into it, here's an article about fixed point and TI DSPs: http://www.mathworks.com/access/helpdesk/help/toolbox/tic2000/f1-6350.html

Somebody please put some of the above into the article. I've grown tired of edit wars.


P.S. By the way, the line "1:8:23 // signed 8-bit fixed point with 24 bit fractional, the IEEE 754 format (citation needed)" is total crap, since IEEE 754 is a floating-point standard, not a fixed-point one.


-Panze 91.153.20.52 06:13, 9 May 2007 (UTC)[reply]

a few wiki pages that mention fixed-point arithmetic

Fixed-point arithmetic http://wiki.tcl.tk/12166

Computers and real numbers http://wiki.tcl.tk/11969

Portable Fixed Point http://pixwiki.bafsoft.com/wiki/index.php/Portable_Fixed_Point

http://en.wikibooks.org/wiki/Handbook_of_Descriptive_Statistics/Measures_of_Statistical_Variability/Variance

Inkscape renderer issues http://wiki.inkscape.org/cgi-bin/wiki.pl?action=browse&diff=1&id=RendererIssues

other fixed-point pages

Getting a Speed Boost with Fixed-Point Math by Daniel W. Rickey http://www.mactech.com/articles/mactech/Vol.13/13.11/Fixed-PointMathSpeed/

The "double blend trick" by Michael Herf

Developing Smartphone Games by Andy Sjostrom of businessanyplace, 2003-01: For games on the ARM processor, use fixed point math, not floating point math.

"fixed-point" vs. "fixed point" and "floating-point" vs. "floating point"

Should these not be consistent? English is not my native language, so somebody please help out. I find it strange that Wikipedia sticks to "fixed-point arithmetic" and "floating point". Velle 12:26, 25 March 2006 (UTC)[reply]

Look at the talk page for Floating point, in the section called "hyphen or not?". —The preceding unsigned comment was added by 24.26.195.76 (talkcontribs) 05:53, 28 May 2006 (UTC)

Modern games

Tom Forsyth is a respected game developer. Here he explains why even modern games may prefer fixed point over floating point, for reasons of precision rather than speed: A matter of precision. —The preceding unsigned comment was added by 86.133.150.83 (talkcontribs) 12:02, 3 October 2006 (UTC)

His message: use more bits, and if you want speed, use integer ratios. With more bits, a floating-point number either has no hardware support or is slower. It is a very good article. As I said below, if the mantissa is the same length as the integer and you restrict yourself as you have to when using integer ratios, they both have the same accuracy. Charles Esson 21:56, 7 April 2007 (UTC)[reply]

Name Binary scaling

It should have this name because that is what it is referred to as in old embedded assembler programs. —The preceding unsigned comment was added by 217.154.59.122 (talk) 14:05, 19 February 2007 (UTC).[reply]

Then have Binary scaling redirect to fixed-point arithmetic, and note that fixed-point arithmetic is also known as binary scaling in old embedded assembler programs. These are clearly the same technique, and it is confusing to have two different pages with different descriptions. 66.45.136.143 06:29, 28 March 2007 (UTC)[reply]

No, that won't work, because there is lots of old code that still uses B notation. If you re-wrote the fixed-point section to accurately include how B notation is used, and how angles use it (including working out transcendentals using B0), then it could be merged. But you would need someone who knew what they were doing or you could get it wrong (and there is enough 'wrong' stuff on wiki anyway; we don't want any more of that!) 81.106.115.105 (talk) 23:34, 5 January 2009 (UTC)[reply]

The general case is integer based arithmetic

If an integer a stands for the value a/base, the basic operations are:

 (a/base) × (b/base) = (a×b/base)/base

 (a/base) ÷ (b/base) = (a×base/b)/base

 (a/base) + (b/base) = (a+b)/base

etc.

This can be sped up by using Q numbers, that is, by making the base a power of two (base = 2^Q). The rescalings then become shifts:

 (a/base) × (b/base) = ((a×b) >> Q)/base

 (a/base) ÷ (b/base) = ((a << Q)/b)/base

 (a/base) + (b/base) = (a+b)/base
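
In C, with a 64-bit intermediate so the products and shifted dividends don't overflow, the same three operations might look like this (a sketch; the value of Q, the type widths, and the function names are illustrative choices, not taken from the article):

 #include <stdint.h>

 #define Q 16                                  /* base = 2^Q */

 /* a and b are integers standing for the values a/2^Q and b/2^Q. */
 static int32_t q_mul(int32_t a, int32_t b)
 {
     return (int32_t)(((int64_t)a * b) >> Q);  /* (a*b) >> Q, i.e. a*b/base */
 }

 static int32_t q_div(int32_t a, int32_t b)
 {
     return (int32_t)(((int64_t)a << Q) / b);  /* (a << Q)/b, i.e. a*base/b */
 }

 static int32_t q_add(int32_t a, int32_t b)
 {
     return a + b;                             /* scale factors already agree */
 }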

I think Q numbers and binary scaling are the same thing described with different words (but I am not sure, because a purist might say Q numbers have to be signed magnitude). You need to be a little careful with fixed-point arithmetic; it is only the same thing if the radix is binary. Fixed point using a decimal radix would be a different animal.

The comment about floating point not being as accurate is wrong: if you place the same restrictions on your floating-point calculations as you have to place on your calculations when doing fixed-point maths, and the mantissa of the floating-point format is the same length as the integer, the results are the same. If the floating-point number has a longer mantissa than the integer being used for the integer-based maths, the floating-point result will be better. Further, many integer calculations get done without proper rounding (it still matters); the floating-point unit generally does it properly.

>> Charles, for the same number of bits, say 32, fixed point is more accurate, since all 32 bits of precision are used. 32-bit floating point will only use 24 bits of precision, with 8 bits of exponent. Pete Darnell 216.204.127.98 22:28, 31 May 2007 (UTC)[reply]

The article asked for a comment :-) Charles Esson 11:18, 6 April 2007 (UTC)[reply]

And looking at the links in my comment, there needs to be a computing article on floating point; the current entry doesn't come close to dealing with the topic (and it is an interesting topic). I just checked IEEE 754: the fractional part is called the mantissa, so the linked page for mantissa also has some issues. Charles Esson 11:36, 6 April 2007 (UTC)[reply]

IEEE 754 (and the current revision, IEEE 754r) both use the term significand for the fractional part, not mantissa. mfc (talk) 12:10, 29 March 2008 (UTC)[reply]

Special case

No, integers (radix point immediately to the right of the LSD) are the special case, and fixed point is the general case. It seems that, perhaps tracing back to Fortran, computer languages supplying fixed-point values with a scale factor other than 0 (in PL/I notation) are rare. Note also that PL/I allows the scale factor, that is, the number of digits to the right of the radix point, to be negative. In that case, the values are still integers in the math sense, but it seems not in the CS sense. There is no convenient written notation for values with a negative number of digits after the decimal point. Gah4 (talk) 17:44, 10 April 2020 (UTC)[reply]

It should not be merged

The main article fixed-point arithmetic is a confused presentation of binary-based fixed-point stuff; the examples in the section Current common uses of fixed-point arithmetic cover both binary- and decimal-based fixed point.

I have to do more reading to make sure but I think the general thrust should be:

Fixed-point arithmetic is the general set, presented in a way that supports binary and decimal radix. Binary scaling and Q numbers have a binary base. Binary numbers probably should go to Q number. I think the Q numbers article needs to describe the format Qn.n; at the moment it only mentions Qn, which is a subset of the former. Charles Esson 21:37, 9 April 2007 (UTC)[reply]

I'm confused. What should not be merged? --68.0.124.33 (talk) 03:37, 28 March 2008 (UTC)[reply]

Please don't merge the Q format article into the Fixed Point article. The Q format article is substantial and specific to the Q format, not a stubby article full of general fixed-point information. It deserves to stand on its own. I found it very useful as-is. RolfeDH 12:09, 4 Sept 2008 (UTC)

There should not be a merge, not of any of the articles. Whether or not they are the same part of mathematics is irrelevant. Wiki is not a 'learned journal'; rather, it is a multi-linked learning resource. All that is needed is a link to articles on the 'other parts' of the subject. I found it all via a Qn.Qn enquiry. Now I know it's called fixed point. 94.172.52.241 (talk) 17:26, 29 January 2010 (UTC)[reply]

I don't think the articles should be merged, for several reasons. First, the resulting article would simply be too big. IMHO one of the benefits of the wiki/hyperlink approach is that you can write a series of fairly short, succinct articles on related subjects and then cross-link them. Articles that are pages long with 25 sub-sections seem to defeat the whole purpose of a general reference encyclopedia.

Secondly, I think the current articles already confuse several issues. "Fixed point" is a generic term for any finite numeric representation system in which the position of the radix (base) point is not explicitly included as part of the representation itself. One way of categorizing fixed-point representation schemes is by the particular radix/base that is used. For historical reasons, over the last 50 years the most important practical application of fixed-point numeric representation has been in the field of computer science, where for technical reasons (existing digital logic circuit design constraints) the radix/base used has overwhelmingly been 2; however, it is important to begin any discussion of fixed point by pointing out that it is a concept that can be applied to any radix. It is very possible that in the near future, digital logic gate design constraints may change, resulting in hardware that is capable of operating efficiently in other radices (three, four or more). For historical reasons, the bulk of the discussion should focus on binary fixed point and its applications, but the intro should not give the reader the impression that this is the only meaning of the term "fixed point."

"Q Format" is a specific binary (radix=2) fixed point representation scheme that was developed and promoted by a particular company (Texas Instruments) in a particular context (embedded systems). It has specific historical connotations that should probably be clearly explained (independent of the subject of fixed point representation in general). I'm not familiar with the term "B format", but to the extent that it was a precursor to the widespread adoption of TI's Q format it should either be discussed as part of that article or given its own article.

"Binary Fixed Point Arithmetic" is a sub-set of the more general topic of fixed point numeric representation. It deals with the arithmetic operators (+, -, *, /) and their specific properties when applied to numbers represented using the fixed point representation scheme. This article should address issues such as overflow that occur as the result of applying arithmetic operators to fixed point representations. I would argue that the topic of "scaling" (which is a technique that is often used in conjunction with the application of arithmetic operators to fixed point representations schemes as part of implementing an algorithm) should be treated as a separate subject (see below).

"Scaling" (sometimes referred to as "Slope-Bias Scaling" is a very general technique that can be used to map a given set of real values (with a particular range and precision) onto a particular fixed point numeric representation scheme using the following formula: real-world value = (slope x integer) + bias, where the slope can be expressed as, slope = (slope adjustment contant) x radix^exponent. It is the key step in determining a) whether a particular real world requirement (range and precision) can be satisfied by a particular finite fixed point representation scheme (i.e. do you have enough bits to cover the required range at the required precision?) and b) being able to convert between and adjust fixed point representation schemes "on the fly" in order to avoid overflow while preserving precision.

Again, the term "Binary Scaling" refers to the specific case where the radix/base of the underlying fixed point numeric representation scheme = 2. From a historical perspective this is the most significant application and should thus be the focus of the article (without losing sight of the fact that it is a special case of the more general concept of scaling).

Due to the significant conceptual differences between "scaling" and "arithmetic" I would argue that they should be treated as separate (but related) subjects. The purpose of scaling operations (primarily shift and add) is to establish and maintain the range and precision of the underlying fixed-point numeric representation, whereas the purpose of the more general arithmetic (and logic) operations (+, -, *, /, AND, OR, XOR, etc.) is to actually perform the desired computation. In the course of performing a series of arithmetic operations on a fixed-point numeric representation (as part of an algorithm), a programmer may be required to recognize the need for, and understand how to apply, additional scaling operations in order to avoid a problem like overflow. Lumping the two topics together makes it harder for the reader to understand this important distinction.

Enderz Game (talk) —Preceding undated comment added 15:56, 3 June 2011 (UTC).[reply]

merge

I suggest merging both binary scaling and Q (number format) into the fixed-point arithmetic article. As far as I can tell, they are all identical. So they should all be discussed in one article, with redirects from the other name -- like puma and mountain lion. --68.0.124.33 (talk) 03:37, 28 March 2008 (UTC)[reply]

As Charles Esson mentions above, binary scaling and Q (number format) are not synonyms for fixed-point arithmetic (although possibly they are synonyms for each other and should be merged). They are a special case (radix two) of fixed point. The fixed-point article should probably have radix-dependent aspects moved to separate articles, too (with binary going into one of those other two).
So I'd suggest a way forward might be:
  1. Merge binary scaling and Q (number format) into binary fixed-point
  2. Merge binary aspects of fixed-point arithmetic into binary fixed-point
  3. Create a decimal fixed-point (perhaps)
  4. clean up fixed-point arithmetic so it is radix-independent.
mfc (talk) 12:16, 29 March 2008 (UTC)[reply]
You're right -- binary scaling is not an exact synonym for fixed-point. Binary scaling is one of several kinds of fixed-point.
I agree that someday, there might be enough information to warrant 3 articles: "binary fixed-point", "decimal fixed-point", and radix-independent "fixed-point arithmetic".
But I think a better way forward is to use the "big buckets first" technique:
  1. Merge all three articles into one big "fixed-point arithmetic" article. That name does cover all these more specific ideas, right? I think it is fine for information about one particular kind of thing not to have its own article, when that information is included in the more general article -- the way that "cornbread muffins" don't have their own article, but are instead mentioned in the muffin article.
  2. It's much easier to move text from the "binary" section to the "radix-independent" section and back again (and spot repetitive redundancies) when they are sections in one big article than across several articles. So leave everything in one big article for a few months while we clean it up and re-organize the sections.
  3. If and when the article gets "too big" (WP:SIZE), then split out more specific articles as you suggested -- or perhaps (as suggested by the "big buckets first" essay) by that time the article will have some other clear-cut section divisions that would be even better.
--68.0.124.33 (talk) 20:28, 2 April 2008 (UTC)[reply]
Support the merge. Besides, I have seen no reference that "Q number format" is an established name for binary fixed-point. --Jorge Stolfi (talk) 18:25, 24 June 2009 (UTC)[reply]


I support the merge. Without the overlapping content, the other articles will become short subsections, decreasing a reader's total reading time. Ray Van De Walker 23:04, 21 May 2011 (UTC)

Scaling factor: 100 or 1/100?

I rewrote the sections "Representation" and "Operations" assuming that the "scale factor" is the number that must be multiplied by the underlying integer to give the intended value. In this interpretation, a binary number with b bits, of which f are fraction bits, has a scale factor of 1/2^f. When one stores a dollar amount as an integer number of cents, the scaling factor would be 1/100.

However, it seems that the rest of the article assumes the opposite interpretation, namely that the intended value times the scaling factor is the underlying integer. In the above examples, the binary scaling factor is 2^f and the dollar scaling factor is 100.

I must now fix the inconsistency, but for that I need to know which interpretation of "scaling factor" is the commonly used one. (If both are used in different contexts, we must warn the reader about that.) Note that a "factor" is an operand of a multiplication, not of a division. All the best, --Jorge Stolfi (talk) 23:37, 9 February 2010 (UTC)[reply]
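
The dollars-and-cents case, under both readings, in a couple of lines of C (illustrative only; either way the stored integer for $12.34 is the same 1234, and only the name given to the factor changes):

 #include <math.h>

 /* Reading 1: scale factor 1/100, value   = integer * (1/100).
    Reading 2: scale factor 100,   integer = value * 100.       */
 long long cents_from_dollars(double dollars)  { return llround(dollars * 100.0); }
 double    dollars_from_cents(long long cents) { return cents / 100.0; }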

For the sake of clarity, call 2^f a "denominator" because that's what it is. --Damian Yerrick (talk | stalk) 21:25, 19 September 2010 (UTC)[reply]
Seems to me that the ambiguity goes pretty deep. If someone gives a number and we say "you are off by a factor of three", it could be either way. If we say "you are off by three", that means (without saying which) plus or minus three. But as for actual scaled arithmetic, fractional cases (radix point moved to the left) are more common than the other way. PL/I calls "scale" the number of positions the radix point (radix 2 or 10) moves left, or, in more common usage, s digits after the radix point (where the actual value can be positive or negative). Note that we have decimal-fraction notation for shifting the decimal point left, but no common notation for the other way. Also, Q notation is the (positive) number of bits after the binary point. So the scale factor should be (radix) to the power (digits after the radix point). How far off is the article? Gah4 (talk) 06:59, 18 June 2021 (UTC)[reply]

Tautology in summary

The summary states: "Fixed-point numbers are useful for representing fractional values, usually in base 2 or base 10, when the executing processor has no floating point unit (FPU) or if fixed-point provides improved performance or accuracy for the application at hand. Most low-cost embedded microprocessors and microcontrollers do not have an FPU." The third clause is a tautology. It's saying "Fixed-point numbers are useful if fixed-point provides improved performance or accuracy for the application at hand". Asmor (talk) 16:50, 1 May 2012 (UTC)[reply]

Seems close enough to me. According to Knuth, fixed point should be used in finance and typesetting, in both cases not because it is faster or even more accurate, but because it is more dependable (that is, consistent). TeX uses fixed point for all quantities related to actual typeset output. Floating point is used in some error messages that don't affect the typeset results. This guarantees that the same input will generate the same typeset output on all processors. But mostly, fixed point is used for values that have an absolute uncertainty, whereas floating point is best for those with relative uncertainty. Gah4 (talk) 17:15, 1 November 2019 (UTC)[reply]

new merge request

It seems that there is a new merge request, even though there are two sections on merge request above. Gah4 (talk) 17:04, 1 November 2019 (UTC)[reply]

  • OPPOSE: The Q page is specific to binary scaling, and includes some more specific details used in that base. This article is somewhat, though not completely, radix-independent. I believe that there is enough for two separate articles of reasonable length. Gah4 (talk) 17:04, 1 November 2019 (UTC)[reply]
  • OPPOSE: The Q page should get merged with binary scaling. No idea why nobody got that done 10 years ago. --Artoria2e5 🌉 14:38, 16 January 2020 (UTC)[reply]
I suppose so, but binary scaling could use some work. It seems to be written as though scaled fixed point were a poor substitute for floating point. That might sometimes be true, but more to the point, fixed point should be used for quantities with an absolute uncertainty, and floating point for quantities with relative uncertainty. Or maybe binary scaling should merge with the Q page. (I will have to look at the latter.) Gah4 (talk) 15:42, 16 January 2020 (UTC)[reply]

complex

I notice that the introduction to this article mentions real data type, which seems to exclude a complex data type. PL/I, at least, supports fixed point complex data, with either decimal or binary scaling. Does the article need to limit it to real? Gah4 (talk) 17:06, 1 November 2019 (UTC)[reply]

hint

The article seems to hint at, but not actually say, that the scale factor can have a different radix from the underlying representation. Most commonly, this is representing values with decimal scaling in binary, such as monetary values in cents, as binary integers. Using the same radix allows for shifts instead of multiply and divide whenever the scaling changes. Gah4 (talk) 14:11, 8 April 2020 (UTC)[reply]
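
A small C illustration of that point (the values are arbitrary): rescaling a binary-scaled quantity is a shift, while rescaling a decimal-scaled quantity held in a binary integer needs a multiply or divide.

 #include <stdint.h>

 /* Decimal scaling in a binary integer: cents, scale factor 1/100. */
 int64_t price_cents   = 1999;         /* $19.99 */
 int64_t price_dollars = 1999 / 100;   /* dropping the decimal fraction: a divide */

 /* Binary scaling: scale factor 1/2^8. */
 int64_t x_q8    = 5120;               /* stands for 5120/256 = 20.0 */
 int64_t x_whole = 5120 >> 8;          /* dropping the binary fraction: a shift */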

This article is mostly wrong. Ideally, there should be a new page entitled "scaled arithmetic" and the fixed-point page should state "fixed-point arithmetic is a special case of scaled arithmetic where the scaling factor uses the radix of the underlying representation". Note that I say "mostly wrong" because (hypothetically) an arbitrary scale can be called "fixed point" if a system supports an arbitrary radix (and if the decimal point is not in the middle of a digit in that arbitrary radix); but any attempt to make that claim would be stretching the boundaries of "common usage" beyond its breaking point. 101.174.9.107 (talk) 21:55, 17 June 2021 (UTC)[reply]
I suppose so, but even so, I don't think it needs its own article. And if it did, someone would put in a merge request. Even more, we commonly expect the radix to be an integer, but that isn't required. Fortran allows the radix of the underlying arithmetic, for both fixed and floating point, to be any integer greater than one. There is a claim that the optimal radix is e, which rounds to 2 or 3. Also, you should use the more general term radix point. Mostly, for such small differences, the user is assumed to be able to figure it out. If the scale factor is an integer power of the radix, then scaling (such as needed for arithmetic) is done with shifting. Otherwise it is done with a multiply or divide, and so is less convenient. There is supposed to have been a Soviet ternary computer; otherwise they are usually binary or decimal, and scale factors are usually (integer) powers of 2 or 10, as far as actually implemented in hardware or computer languages. Note also that IEEE 754 includes decimal formats, storing the significand either in densely packed decimal or as a binary integer, the former more convenient in hardware, the latter in software. Gah4 (talk) 07:40, 18 June 2021 (UTC)[reply]

When do they teach bases other than 10 in school? I was in school during the New Math years, when we learned about different bases in 4th grade. I suspect that is gone now. Even so, they teach decimal fractions in some grade, but only whole numbers in other bases. The extension to fractions in other bases isn't so hard, but is it taught at all, in any grade? Gah4 (talk) 18:27, 9 April 2020 (UTC)[reply]

floating point

OK, now that the merge is done, we can go over the whole thing together. From a quick look through it, it seems to mention floating point in places where floating point isn't being discussed. I suspect that is because the usage is too often wrong, and even that it is often taught wrong in schools. Too often, even in this article, floating point means anything that isn't an integer. But it is only floating if it actually floats. Fixed-point decimal is still fixed point, the subject of this article. Gah4 (talk) 01:58, 25 June 2021 (UTC)[reply]

Good point. I have now improved some of the wording from the "Binary scaling" article: https://wiki.riteme.site/w/index.php?diff=1030295346&oldid=1030289681&title=Fixed-point_arithmetic&diffmode=source Solomon Ucko (talk) 02:12, 25 June 2021 (UTC)[reply]

Merging and restructuring proposal

Sorry for not noticing the prior discussion as to whether or not to merge... I saw https://wiki.riteme.site/wiki/Talk:Fixed-point_arithmetic#Name_Binary_scaling, as well as a few revisions on Binary scaling: https://wiki.riteme.site/w/index.php?diff=93645268&oldid=83052273&title=Binary_scaling&diffmode=source, https://wiki.riteme.site/w/index.php?diff=135233978&oldid=124073487&title=Binary_scaling&diffmode=source, https://wiki.riteme.site/w/index.php?diff=201494301&oldid=185846280&title=Binary_scaling&diffmode=source for prior discussion. However, I missed https://wiki.riteme.site/wiki/Talk:Fixed-point_arithmetic#It_should_not_be_merged, https://wiki.riteme.site/wiki/Talk:Fixed-point_arithmetic#merge, and https://wiki.riteme.site/wiki/Talk:Fixed-point_arithmetic#new_merge_request.

So far, I merged Binary scaling into its own section in this article (besides categories and See also, which got merged into the corresponding parts of this article), as well as adding a few small improvements to its wording (in a separate edit). Feel free to / let me know if I should revert any of my edits. (Including the replacement of the source page, merge banners on the talk pages, and a couple of link edits.)

I propose also merging in Q (number format) like so:

  • Copy the intro and everything in Characteristics starting from "There are two conflicting notations" and stopping before "Unlike floating point numbers" into Notation
  • Copy the rest of Characteristics into the intro?
  • Copy Conversion into Representation
  • Copy Math operations into Operations
  • Merge See also (not needed in this case), Further reading, External links, and Categories (not needed in this case)
  • Make sure references are dealt with properly
  • In a separate edit, improve the wording to make things flow better.
  • Do the rest of the steps listed in Wikipedia:Merging
  • In another edit, probably split up the Binary scaling section and merge with other sections
  • Afterwards, in another edit, probably split out / demarcate text/sections specific to binary or decimal, and possibly even move them into separate articles if there is enough content.

I can do it unless anyone else wants to do it or objects to it being done.

Pinging anyone involved in prior discussion, for feedback/comments: @217.154.59.122, 66.45.136.143, 81.106.115.105, 68.0.124.33, RolfeDH, 94.172.52.241, Enderz Game, Mfc, Jorge Stolfi, Ray Van De Walker, Gah4, Artoria2e5, Whoop whoop, and Radagast83:

Please let me know if there's anything I should or shouldn't be doing, or should be doing differently, and I apologize if I made any mistakes. This is my first time doing a merge.

Solomon Ucko (talk) 02:58, 25 June 2021 (UTC)[reply]

Oops/sorry, missed Charles Esson Solomon Ucko (talk) 03:00, 25 June 2021 (UTC)[reply]

To clarify, the reason why I am proposing to merge Binary scaling and Q (number format) into Fixed-point arithmetic is that I felt/feel the three articles, while by title they didn't necessarily have to, had/have a lot of overlap with each other, without making it clear that they are basically the same thing (at least in the case of Binary scaling). In my opinion, both should get mentioned in the Notation section, and the rest is just a matter of separating binary- and decimal-specific things from base-independent things. I'm not sure those need separate articles, though separate articles for binary and decimal floating point could be reasonable (though I'm not sure if there's enough content for that to be worth it). Solomon Ucko (talk) 03:24, 25 June 2021 (UTC)[reply]

Q notation

Now that all is merged here, I believe that some notations could have their own page. Note that there are two different things. One is the actual meaning of scaled fixed-point arithmetic, that is, with the radix point not immediately to the right of the LSD. Separate from that is the way to describe the format in use, specifically the number of digits, the number after the radix point (possibly negative), and in some cases the representation of negative numbers. Q notation (I have no idea why it is Q) indicates a binary representation with specific numbers of bits before and after the binary point. PL/I uses FIXED BINARY(p,q) and FIXED DECIMAL(p,q), where p is the total digits (not including sign) and q is the digits after the radix point. There might be some things to say about the notation, like why it is called Q, who used it, and when. Otherwise, so far, I think this is fine. Gah4 (talk) 06:17, 25 June 2021 (UTC)[reply]

OK, I suppose I am happy with it now. The Q section here is pretty small. The Q notation page might do with fewer examples, but maybe it is fine. Gah4 (talk) 06:44, 25 June 2021 (UTC)[reply]
I haven't touched Q (number format) at all yet. Should I merge it (either as I proposed in Talk:Fixed-point arithmetic#Merging and restructuring proposal or some other way) or leave it as-is? (not necessarily asking anyone in particular, just trying to figure out what the consensus is.) Solomon Ucko (talk) 02:08, 26 June 2021 (UTC)[reply]
I think it is right now. With the introduction here, and the other article with more details. I don't think it should merge, but if there are already enough details here, they could be removed from Q (number format) and linked to here. Some is specific to Q, like it is mostly used for 16 and 32 bit numbers, even though there is no reason for that. (Other than that is the way people build hardware.) Gah4 (talk) 00:16, 27 June 2021 (UTC)[reply]

@Gah4 and Sollyucko: I tried to clean up this article, sorting the information by sections, simplifying the examples, removing duplicate information, and removing unnecessary "jargon" like the notations Qm.n and Bm, while expanding or generalizing some details. Please check.
I also merged here or deleted much of the stuff in Q (number format) that belonged here. The main content still there is the C code for saturation arithmetic with inputs and outputs in the same format. It does not belong in that article, but I wonder whether it is worth bringing over here.
I also moved Binary angular measurement (BAM) to a separate article, because its features are specific to angular measurements and not of fixed-point formats in general.
--Jorge Stolfi (talk) 02:46, 6 July 2021 (UTC)[reply]

Fixed-Point Arithmetic Definition

I think the current definition of fixed-point arithmetic given in the first sentence is clumsy at a minimum, and possibly even inaccurate. I do not think it is fundamentally about the number of digits before or after the radix point; rather, it is about the representation of a set of fixed-point rationals. Yates (talk) 01:05, 4 July 2021 (UTC)[reply]

@Yates: Thanks. I added a sentence with that broader sense of the term. --Jorge Stolfi (talk) 03:45, 6 July 2021 (UTC)[reply]
Most important is the fixed number of digits after the radix point. In computer usage, that most often means a fixed number before it as well, but I suppose not always. Pricing things in (larger monetary unit) and cents, the number of digits to the left is potentially unlimited, though practically limited by the size of the world economy. Gah4 (talk) 21:24, 6 July 2021 (UTC)[reply]

Precision Versus Speed Comment Not Accurate

I take exception to the assertion in the first part of the page: "in applications that demand high speed more than high precision, like image, video, and digital signal processing;"

Bit-for-bit, fixed-point arithmetic can be more precise. That is because several of the bits in a floating-point word (in IEEE 754) are used for things other than the mantissa. For example, a single-precision floating-point number requires 32 bits, and 32-bit signed fixed-point numbers that are scaled at Q25 or greater are more precise. A common technique in fixed-point arithmetic implementations is to keep results in a Q1.31 format, so that values are always in the range -1 <= x < +1. Such implementations are significantly more precise than single-precision floating point.
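
A quick way to see the gap is to compare the spacing of adjacent representable values just below 1.0 in each format (a sketch only; the printed numbers are the step sizes, about 4.7e-10 for Q1.31 versus about 6.0e-8 for single precision):

 #include <math.h>
 #include <stdio.h>

 int main(void)
 {
     double q1_31_step = ldexp(1.0, -31);               /* Q1.31 step: 2^-31 */
     double float_step = 1.0 - nextafterf(1.0f, 0.0f);  /* float step below 1.0: 2^-24 */
     printf("Q1.31 step %.3g vs. float step %.3g near 1.0\n", q1_31_step, float_step);
     return 0;
 }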

Thankfully this point is clarified in the section "Comparison with floating-point," but the presence of this statement weakens the article.

Finally, one potential advantage of fixed-point arithmetic can be that it requires less power for the same or similar computations. This is related to using less hardware, but is not explicitly stated anywhere that I can find.

Yates (talk) 23:40, 5 July 2021 (UTC)[reply]

  • @Yates: Thanks, will fix.
    Since you seem familiar with the Q notation: do you know how widely used it is? Is the AMD variant the most used of the two? Is it capital or lower-case q? Do you have any reference other than TI or AMD? Thanks...
    --Jorge Stolfi (talk) 02:49, 6 July 2021 (UTC)[reply]
Floating point is mostly needed for quantities that have a relative uncertainty. That is, where the uncertainty, more or less, scales with the size of the quantity, and normally also for quantities that range over very many orders of magnitude. For many quantities, the relative error decreases as the quantity gets larger. Distances to planets might be known to a few digits. The atomic spacing in a silicon crystal is known to about 10 digits. Quantities with an absolute uncertainty are best done in fixed point. Two quantities that Knuth believes should always be fixed point are finance and typesetting. Gah4 (talk) 21:44, 6 July 2021 (UTC)[reply]

Needs history section

The article needs/deserves a "History" section.
In ancient Mesopotamia, mathematicians used base-60 positional notation for fractions, although it was neither fixed-point nor floating-point, but maybe "mental-point": they would say that 30 times 4 was 2 (rather than (2:0) in base 60, i.e. 120). But that skill apparently was lost; Classical Greeks and Romans used decimal power systems for integers, but mixed bases for fractions, like 1/12 etc.
I suppose that power-of-ten fractions were reintroduced in Europe only when the Indian decimal positional system was learned through the Caliphate.
Percentages are an old example of fixed-point arithmetic, including the need for rescaling after multiplication (20% times 30% is 6%, not 600%).
Babbage's Analytical Engine was meant to compute math function tables, surely using fixed-point arithmetic. Maybe Countess Ada used it in her programs. Did she use the Ada Language notation? 8-)
For several centuries, "computers" were people who did computations by hand as a job. Much of that work must have been done with fixed precision specified by the employer.
One should review the ENIAC and other early computers for use of fixed-point arithmetic.
Fixed point was the norm in Grace Hopper's COBOL.
Etc.
--Jorge Stolfi (talk) 03:18, 6 July 2021 (UTC)[reply]

As well as I know it, and maybe not so well, rational numbers (fractions) were used long before decimal numbers with digits after the decimal point. There are stories about Archimedes computing more accurate approximations to pi, but I don't know whether as rational values or decimal fractions. Also, it was the Difference Engine that was to compute tables. It seems to me that floating point might have come along with the slide rule, which doesn't keep track of powers of 10. Until there was floating point, there was no need to disambiguate fixed point. Gah4 (talk) 21:34, 6 July 2021 (UTC)[reply]

The wide availability of fast floating-point processors, with strictly standardized behavior, have greatly reduced the demand for binary fixed point support.

There is a {{cn}} for: "The wide availability of fast floating-point processors, with strictly standardized behavior, have greatly reduced the demand for binary fixed point support." I suspect the bigger reason is that not so many people know about it. According to Knuth, two things that should always be done in fixed point are finance and typesetting. (He wrote that while working on TeX.) A large number of algorithms, mostly not those related to the physical sciences, work best in fixed point. That is where the rounding, when needed, can be depended on to go the right way. It is also common in digital signal processing. The article also suggests that it is easy to implement using integer arithmetic on systems that don't supply scaled fixed point. That would be true if the system supplied multiply with a double-length product and divide with a double-length dividend, as pretty much all hardware includes. But they don't, and it isn't. Gah4 (talk) 00:02, 21 October 2021 (UTC)[reply]
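
To illustrate that last point: in C the hardware's widening multiply is reachable only by detouring through a wider type, and once the operands are already the widest standard type there is no portable way to get the double-length product at all (a sketch; the 128-bit branch assumes a GCC/Clang extension):

 #include <stdint.h>

 /* 32-bit fixed point: the 32x32 -> 64 hardware multiply is expressed by
    widening to int64_t and hoping the compiler picks the right instruction. */
 static int32_t mul_q16(int32_t a, int32_t b)
 {
     return (int32_t)(((int64_t)a * b) >> 16);
 }

 /* 64-bit fixed point needs a 128-bit intermediate, which standard C does
    not provide; this relies on a common compiler extension. */
 #ifdef __SIZEOF_INT128__
 static int64_t mul_q32(int64_t a, int64_t b)
 {
     return (int64_t)(((__int128)a * b) >> 32);
 }
 #endif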

Knuth has suggested that only fixed-point arithmetic be used for typesetting and finance. I believe that some finance calculations, such as interest, are required to be done in fixed point, with specific rounding rules. Probably should find the WP:RS for those. And TeX uses only fixed point for calculations affecting the typeset output. Floating point is used for some error messages. Gah4 (talk) 07:46, 18 March 2024 (UTC)[reply]

Bad source for C language support?

The page linked as the source for an apparent proposal to add fixed point to the C language doesn't seem to actually back up the claim. The page doesn't even include the word "fixed", and it doesn't seem like any of the linked papers mention it either. There is mention of decimal floating-point formats, but that's entirely different. Ifier.B (talk) 22:27, 10 May 2024 (UTC)[reply]

Without actually reading it, it doesn't seem so far off. Most people are used to scaled fixed decimal, but not so many to scaled fixed binary or other radices. PL/I supports both binary and decimal, with the radix point moved up to 127 digits either way. In practice, scaled fixed decimal is much more common. Note the addition, not so recent by now, of decimal floating point to the IEEE 754 standard. IBM might be the only one to sell hardware that supports it, though. And IBM S/360 has always supported BCD arithmetic. Not that I ever wrote any COBOL programs, but I believe it supports scaled fixed decimal, as commonly needed for financial calculations. Gah4 (talk) 11:38, 11 May 2024 (UTC)[reply]

Detailed examples

I pity the poor soul who hopes to learn something useful from the Detailed examples section. To be blunt, it is a mess, both in terms of composition and of the information presented. In fact, much of this article is poorly written and in several places clearly shows signs of opinion and original research.

216.152.18.132 (talk) 05:31, 23 May 2024 (UTC)[reply]