Talk:List of AMD graphics processing units/Archive 1

From Wikipedia, the free encyclopedia

Embedded not here, should be here?

See

https://wiki.riteme.site/wiki/Advanced_Micro_Devices#Embedded_graphics

Linking here?

In German

Embedded graphics processors (translated from the German)

AMD also offers graphics processors for the embedded segment. For these models AMD guarantees long-term availability of five years.[1] E9000 modules based on Polaris have now been announced as successors to the E8000 series.[2]

Model Released Shader processors Floating-point performance (peak, single precision) Memory Memory bus width Memory clock OpenGL version OpenCL version DirectX version Vulkan UVD Power Interface
E9550 (Polaris, GCN 4) [3] 2016-09-27 2304 (36 CU) 5834 GFLOPS 8 GB GDDR5 256 Bit 2000 MHz 4.5 2.0 12 1.0 6.3 95 W MXM-B
E9260 (Polaris, GCN 4) [4] 2016-09-27 896 (14 CU) 2150 GFLOPS 4 GB GDDR5 128 Bit 1750 MHz 4.5 2.0 12 1.0 6.3 50 W PCIe 3.0, MXM-A
E8950 (GCN 3) [5] 2015-09-29 2048 (32 CU) 3010 GFLOPS 8 GB GDDR5 128 Bit 1500 MHz 4.5 2.0 12 1.0 4.2 95 W MXM-B
E8870 (GCN 2) [6] 2015-09-29 768 (12 CU) 1536 GFLOPS 4 GB GDDR5 128 Bit 1500 MHz 4.5 2.0 12 1.0 4.2 75 W PCIe 3.0, MXM-B
E8860 (GCN 1) [7], [8], [9] 2014-01-25 640 (10 CU) 800 GFLOPS 2 GB GDDR5 128 Bit 1125 MHz 4.5 1.2 12.0 1.0 3.1 37 W PCIe 3.0, MXM-B
E6760 (Turks) [10], [11] 2011-05-02 480 (6 CU) 576 GFLOPS 1 GB GDDR5 128 Bit 800 MHz 4.3 1.2 11 N/A 3.0 35 W PCIe 2.1, MXM-A, MCM
E6465 (Caicos) [12], [13] 2015-09-29 160 (2 CU) 192 GFLOPS 2 GB GDDR5 64 Bit 800 MHz 4.5 1.2 11.1 N/A 3.0 < 20 W PCIe 2.1, MXM-A, MCM
E6460 (Caicos) [14][15] 2011-04-07 160 (2 CU) 192 GFLOPS 512 MB GDDR5 64 Bit 800 MHz 4.5 1.2 11.1 N/A 3.0 16 W PCIe 2.1, MXM-A, MCM
E4690 (RV730) [16] 2009-06-01 320 (4 CU) 388 GFLOPS 512 MB GDDR3 128 Bit 700 MHz 3.3 1.0 10.1 N/A 2.2 30 W MXM-II
E2400 (RV610) [17] 2006-07-28 40 (2 CU) 48 GFLOPS 128 MB GDDR3 64 Bit 700 MHz 3.3 ATI Stream 10.0 N/A 1.0 25 W MXM-II
  1. ^ "AMD Embedded-Grafikprozessoren" (in German). AMD. Retrieved 2011-09-15.
  2. ^ http://www.heise.de/newsticker/meldung/AMD-Embedded-Radeon-E9000-Grafikchips-fuer-4K-Spielhoellen-und-VR-Mediziner-3334539.html
  3. ^ https://www.techpowerup.com/gpudb/2882/radeon-e9550-mxm
  4. ^ https://www.techpowerup.com/gpudb/2883/radeon-e9260-mxm
  5. ^ https://www.techpowerup.com/gpudb/2765/radeon-e8950
  6. ^ https://www.techpowerup.com/gpudb/2767/radeon-e8870
  7. ^ https://www.amd.com/Documents/AMD_Embedded_Radeon_E8860_ProductBrief.pdf
  8. ^ http://www.heise.de/newsticker/meldung/Grafikeinheit-fuer-Spielautomaten-Radeon-E8860-mit-GCN-Architektur-2126747.html
  9. ^ https://www.techpowerup.com/gpudb/2550/radeon-e8860
  10. ^ https://www.amd.com/Documents/AMD-Radeon-E6760-Discrete-GPU-product-brief.pdf
  11. ^ https://www.techpowerup.com/gpudb/1736/radeon-e6760
  12. ^ http://www.amd.com/en-us/press-releases/Pages/amd-graphics-lineup-2015sep29.aspx
  13. ^ https://www.techpowerup.com/gpudb/2766/radeon-e6465
  14. ^ https://www.amd.com/Documents/power-efficient-gpu-product-brief.pdf
  15. ^ https://www.techpowerup.com/gpudb/1738/radeon-e6460
  16. ^ https://www.techpowerup.com/gpudb/1777/radeon-e4690
  17. ^ https://www.techpowerup.com/gpudb/1739/radeon-e2400
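As a sanity check on the table, peak single-precision throughput for these parts follows the usual shaders × 2 FLOPs (one fused multiply-add per cycle) × clock formula. The clock values below are assumptions back-derived from the table's GFLOPS figures, not official specifications:

```python
def peak_gflops(shaders: int, clock_mhz: float) -> float:
    """Peak single-precision GFLOPS: each shader ALU performs one
    fused multiply-add (2 FLOPs) per cycle."""
    return shaders * 2 * clock_mhz / 1000

# E9550: 2304 shaders; an assumed ~1266 MHz boost clock reproduces
# the table's 5834 GFLOPS figure.
print(round(peak_gflops(2304, 1266)))  # 5834
# E9260: 896 shaders at an assumed 1200 MHz gives the listed 2150 GFLOPS.
print(round(peak_gflops(896, 1200)))   # 2150
```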

Radeon 9500 non-pro

Early in its life, the Radeon 9500 non-pro had a 256-bit memory bus, as it was built on the same PCB as the Radeon 9700. It was possible to enable the extended memory bus with some modding and driver editing.

VPUs

For consistency, I've listed any GPU having a fixed-function T&L unit as 0.5 VPU, regardless of the number of actual T&L units.

Complete R100 section

Could someone fill in the information for the remaining R100 SKUs: Radeon DDR, SDR, 7200? Downgraded to OpenGL 1.3 due to lack of programmability extensions.

Xpress 1100 and 1150 VPU

Is there a source that says the 1100 and 1150 IGPs have dedicated vertex units, or is vertex processing host-based as in the 200M series? Everything I've read suggests it's basically a smaller-process version of the RS482 with support for AM2.

ATI Radeon Mobility 9200 missing?

It seems that at least one model is missing here: the Mobility 9200. http://ati.amd.com/products/MobilityRadeon9200/index.html

Style

According to the Wikipedia manual of style, "Graphics Processing Units" in the title should be all lowercase.

Core Config

Because several ATI graphics units have no hardware vertex processing capability at all, there needs to be a consistent method of differentiating, in the table, cards with software vertex capabilities (IGPs), fixed-function vertex units (R100), and no vertex processing capabilities (Rage).

4650/4670 TDP (W) update

Reference 21 (from the AMD/ATi presentation slides) states the maximum board power usage to be 48 W/59 W, respectively.

DirectX Support

Why does my ATI 9000 64 MB support DirectX 9.0, while the article says it doesn't? (unsigned comment by User:87.0.236.9)

It supports DirectX 9.0 compatibility, but its feature set matches only that of Direct3D 8.1. Read the "DirectX version note" section --200.148.44.7 01:56, 22 August 2006 (UTC)

TMUs

Every single card on the page is listed as only having one TMU but many if not all of the cards have more than one TMU. Somebody needs to fix this. Some guy 08:00, 16 October 2006 (UTC)

  • Indeed, for some reason, the table lists that one entry according to the old "pipeline" idea, where it first lists the number of "rendering pipelines," then the number of TMUs per pipeline, and then the number of Vertex Shaders/T&L units. This has really been an outdated layout, since "pipelines" are no longer used. In this case, it results in some awkward usage like pipelines: 16(48) to try to get across that a card has 16 ROPs, 16 TMUs, and 48 PSs. Perhaps someone should fix it; I may take care of it if I get the time. It really should have multiple entries to reflect each type of unit. Nottheking 23:02, 6 December 2006 (UTC)

Fab process

Fabrication Process - Average feature size of components of the processor.

I thought it was minimum feature size. Tempshill 20:15, 6 December 2006 (UTC)

  • No, it's not. The actual transistors are smaller than the fabrication process listed, and some of the interconnects are even thinner still. Marking it by the minimum feature size would be too confusing, and hence "average" has been what it's always been defined as, in any sort of publication. Nottheking 00:00, 7 December 2006 (UTC)
    • Affirm Nottheking. Average feature size is used because some features are much larger than the process used, others are much smaller. Minimum feature size would cover < 1% of all features on the process, average accounts for a much larger number of the features, in addition to being a number reflective of the size of all features, not just the ones at size X. Sahrin 01:30, 29 December 2006 (UTC)

Discussion moved from Top of Page

ATi is the correct capitalisation, and not ATI. Should this be changed?

  • It would appear that both capitalisations are actually correct, or at least, that the all-capitals form is correct; while the ATi logo suggests that the last character is in lower-case format, all print/media writings of the name I've found capitalized all of the letters; thus, it could make sense to leave the name the same. Nottheking 23:29, 6 December 2006 (UTC)

Could there be some other or more specific details of GPU processing elements than those pipes, TMUs, and VPUs? Melter

  • The four primary processing elements of a GPU are the ROPs (often now simply called "pipes"), TMUs, pixel shaders, and VPUs. There are also the memory controller(s), which are mentioned separately. Aside from that, there is little more on the GPU proper other than, perhaps, cache, but no readily available documentation covers any other part of a GPU's structure. However, the tables could perhaps use a form of listing components that provides more clarity, especially with the R500 and later designs, where the old design of a "pixel pipeline" was completely eliminated in favor of a less rigid, multi-threaded system. Nottheking 23:29, 6 December 2006 (UTC)

I can't believe the table doesn't show Shader Model support of individual chips... Terrible, that definitely needs to be added. -- xompanthy 01:32, 12 March 2006 (UTC)
You should be able to tell which Shader Model a graphics card supports by looking at the DirectX version of the product.

  • As mentioned above, SM is more or less the same as DX version; the only exceptions are the various DX 8.0 versions, which don't apply to Radeon cards anyway. DX 8.1 is SM 1.4, DX 9.0 is SM 2.0, DX 9.0b is SM 2.0x, DX 9.0c is SM 3.0, and DX 10 is SM 4.0. Nottheking 23:29, 6 December 2006 (UTC)
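The mapping in the comment above can be written as a simple lookup table (purely illustrative):

```python
# DirectX version -> Shader Model, per the mapping given above.
DX_TO_SM = {
    "8.1": "1.4",
    "9.0": "2.0",
    "9.0b": "2.0x",
    "9.0c": "3.0",
    "10": "4.0",
}

print(DX_TO_SM["9.0c"])  # 3.0
```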

Someone had written that the X1600 Pro/XT only had 4 pixel pipelines, and 12 shader units. This is incorrect. The card has 12 pixel pipelines and 12 shader units. I have changed the information for these cards accordingly.

  • Actually, neither is correct; the RV530, like all R500-series parts, has ZERO pixel pipelines; rather, it uses a threaded task system, with an arbiter processor on the GPU distributing the workload across individual units, rather than relying on a pipeline structure that tied a pixel shader and one or more TMUs to a single ROP. Hence, I've edited the whole column to reflect this. Nottheking 23:29, 6 December 2006 (UTC)
    • Good catch nottheking. This was a mistake on my part when I re-aligned the columns for the R500/R600/Mobility Radeon parts. Thank you for correcting my error. Sahrin 01:35, 29 December 2006 (UTC)

There is some missing info. The RV351 was an improved RV350 with lower power consumption, less heat (and a die shrink?). It appears on the Mac as the 9650 (sometimes 9650 XT) and has 256 MB of RAM. There are also PC 9600s using the RV351. And there is also a card called the ATi Rage 128 Ultra, which comes in 16 MiB and 32 MiB versions, and in low-profile form. It appears to be a Rage 128 Pro with faster clocks... Anonymous Coward 04:04 15 June 2006 (UTC)

Thank you to everyone who contributed to this page, it's very useful to me! Chris D'Amato 20:22, 29 June 2006 (UTC)

Bandwidth was calculated incorrectly. I've changed it to use GB/s, where 1 GB/s = 10^9 bytes/second. To properly calculate bandwidth in GiB/s, it's (bus width in bits × effective memory clock) / 8 (bits per byte) / 1,073,741,824 (bytes per GiB).
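The formula above can be sketched as follows (the 256-bit/2000 MHz example is illustrative, not tied to a specific card):

```python
def bandwidth(bus_width_bits: int, effective_clock_mhz: float):
    """Memory bandwidth from bus width and *effective* (data) clock.

    Returns (GB/s, GiB/s): GB/s uses 10^9 bytes, GiB/s uses 2^30 bytes.
    """
    bytes_per_sec = bus_width_bits / 8 * effective_clock_mhz * 1e6
    return bytes_per_sec / 1e9, bytes_per_sec / 2**30

# Example: 256-bit bus, 1000 MHz memory at double data rate
# -> 2000 MHz effective.
gb, gib = bandwidth(256, 2000)
print(f"{gb:.1f} GB/s, {gib:.1f} GiB/s")  # 64.0 GB/s, 59.6 GiB/s
```

Note the roughly 7% gap between the two units, which explains why "corrected" figures can disagree depending on which definition an editor used.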

The continuous lists, divided into AGP and PCIe categories, are becoming obscenely long. Should this article be reorganized by core families instead of native buses, as is Comparison_of_NVIDIA_Graphics_Processing_Units?

Just as a note, my most recent edit is implied by other comments I posted here; in a moment of absent-mindedness, I neglected to write a comment describing my edit to the R500 table, which amounts to a re-writing of what was previously the "pipe x TMU x VPU" column. It is now the "ROP x TMU x PSU x VPU" column; it PROBABLY shouldn't use the letter "x" to separate each number, but that seemed consistent with the style there. Likewise, unlike the other three processing elements of a GPU, there is no article yet for a pixel-shader processing unit. I've chosen the acronym "PSU," which might be a bit confusing, so I'll leave it to others to decide whether that's the one to use. Nottheking 23:29, 6 December 2006 (UTC)

Missing cards and parenthetical notation in fillrate

It seems like this table is missing X1050 and X1550 cards. Or are these a rebranding of other ones?

Also, for some cards a second value is listed in parenthesis in the fillrate column, higher than the other value. (e.g. "2000 (6000)") What does this mean? Any help is appreciated. Sir Fastolfe 02:28, 17 February 2007 (UTC)

X2K series is upside down

All the other series run from low-end -> high-end, while the X2K series runs from high-end -> low-end. Is there any reason for this or should it be fixed? Pik d 23:02, 17 March 2007 (UTC)

Console Graphics Processors

I just noticed that the Xenos(Xbox 360) has a fillrate listed as 16000 compared to 648 for the Flipper(GameCube) and 972 for the Hollywood(Wii). This seems like a huge difference. Can anyone check the numbers on this? Daemonward 13:32, 31 October 2007 (UTC)

  • Looking at the other GPUs, it appears that the fillrate should equal (Core Clock Max) * (Number of Texture Mapping Units). I'll do the calculations and make the appropriate changes. If I'm wrong, please correct me. Daemonward 14:37, 31 October 2007 (UTC)
  • You would be correct, though that would be for the texture fill-rate. For the pixel fill-rate, it would be equal to (Core Clock Max) * (Pixels Per Clock Cycle), though that figure isn't included on the chart. The 648 MTexels/second is the correct figure for the GameCube's Flipper, though the true number for the Wii's Hollywood is unknown, as both the design of the chip and its clock speed are unknown. (The figures used are stand-ins that have no cited source dating after the console's release.) The Xbox 360's Xenos was incorrect, as you noted, as it has 16 TMUs, and runs at 500MHz. Thank you for correcting that. Nottheking (talk) 04:11, 20 November 2007 (UTC)
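The formulas in the bullets above can be sketched as follows (the console clock and TMU counts are the commonly cited figures from the discussion, not official specs):

```python
def texture_fillrate(clock_mhz: float, tmus: int) -> float:
    """Texture fill-rate in MTexels/s: core clock x TMUs."""
    return clock_mhz * tmus

def pixel_fillrate(clock_mhz: float, pixels_per_clock: int) -> float:
    """Pixel fill-rate in MPixels/s: core clock x pixels per clock."""
    return clock_mhz * pixels_per_clock

# Flipper (GameCube): 162 MHz x 4 TMUs = 648 MTexels/s, matching the table.
print(texture_fillrate(162, 4))   # 648
# Xenos (Xbox 360): 500 MHz x 16 TMUs = 8000 MTexels/s, not the 16000
# that was originally listed.
print(texture_fillrate(500, 16))  # 8000
```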

ATI Radeon X1400

Why is there no Radeon X1400 on the list? Where does it fit in?

I don't think there is a Radeon X1400, other than the Mobility. Decembermouse (talk) 05:15, 10 March 2008 (UTC)

HD 3870 X2

The new HD 3870 X2 is a bit confusing regarding its PCIe specifications, because it uses PCIe 2.0 (MSI R3870X2-T2D1G, MSI R3870X2-T2D1G-OC) for the card interface, but PCIe 1.1 (Tom's Hardware "ATI Radeon HD 3870 X2 - Fastest Yet!" (page 4 of 20)) for the on-card CrossFire. - Placi1982 (talk) 12:39, 30 January 2008 (UTC)

I have edited the specs on the 3870X2 to reflect PowerColor's upcoming release containing GDDR4. It should have 2 × 512 MB GDDR4. Someone put 2 × 1024 MB; if you know something about that, please note it here! Decembermouse (talk) 05:19, 10 March 2008 (UTC)

Invalid Citations

You DO NOT cite sources from forums; forums are discussion areas. This was originally reference #3:

<ref name="Chile17012008">{{es icon}} [http://www.chilehardware.com/foro/ati-radeon-hd3100-t132327.html Chile Hardware thread], retrieved January 17, 2008</ref>
[http://www.chilehardware.com/foro/ati-radeon-hd3100-t132327.html Original Citation in Spanish]

Reference #7 (reason: forum)

<ref name="IT_OCP_17012008">{{zh icon}} {{cite web | url=http://www.itocp.com/thread-1931-1-1.html | title=[ATI] First look at AMD 3650 and 3690, you'll regret to miss it. | author=OCP-News | date=2008.01.17}}</ref>
[http://www.itocp.com/thread-1931-1-1.html]


Reference #10 (errors corrected: language is English, not Chinese)

[http://ati.amd.com/products/radeonhd3800/specs.html] (this is not chinese)

--Ramu50 (talk) 19:13, 19 June 2008 (UTC)

Reminder: for referencing citations, try to use the name of the company, or a simple format such as "Nvidia GeForce 9800X2 - Overview", not the article's title, for better verifiability and more reliable sources. The exception is when the webpage is a resource article, tutorial, or web-design piece such as "Moving Beyond OpenGL 1.1 for Windows". --Ramu50 (talk) 03:21, 23 June 2008 (UTC)

Should this citation be removed? It consists of random images from unknown sources; the website looks like a new reporting company, so it's better to quote the article, not the image. --Ramu50 (talk) 17:21, 10 July 2008 (UTC)

OpenGL 2.1 - supported?

This wiki page says that the RV770 supports OpenGL 2.1, but the official AMD page mentions only OpenGL 2.0 support.

[1] —Preceding unsigned comment added by 83.10.216.65 (talk) 21:12, 29 July 2008 (UTC)

Radeon Xpress 1100 IGP

Where should this card be reported? It is stated to have an RS485 chip and its PCI ID is 1002:5975, subsys 103c:30b0, while it is sometimes called RS482 [Radeon Xpress 200M]. It takes 256 MB of memory from the system RAM. —Preceding unsigned comment added by 78.53.197.200 (talk) 13:13, 2 September 2008 (UTC)

Page Protection and Other Language Citation

I think this article should totally be protected; each day there are thousands of people editing this page without any references, and it is hard to track who is opposing whom. I think ALL non-English citations should include a Google Translate link for easier reading.

Don't link to AltaVista Babelfish; it is very inaccurate.

--Ramu50 (talk) 16:32, 15 July 2008 (UTC)

Resolved via Requests for Page Protection (through chatting with an admin), but still monitoring the consistency of article actions. --Ramu50 (talk) 01:56, 21 July 2008 (UTC)

Indeed, the edits I'm seeing to the article hardly ever have sources cited. Even if they're the correct figures, it's critical to properly cite them when we're dealing with tables of numbers here, as it helps fight inaccurate numbers due to speculation; for instance, I just corrected a couple lines that improperly listed the specifications of the RV730 GPUs, which coincidentally, had cited nothing; I added two sources for them. Into the future, I think I'll slowly go through this list as well as the nVidia list and add what sources I can... Yes, it's a lot, but the sources need to be there! Nottheking (talk) 05:59, 12 September 2008 (UTC)


Original Research for R700 section

I don't know German, so I tried to Google-translate the website cited in the section (here: http://www.hartware.de/news_44085.html) into English (for your convenience: here). It yields the following in the translated text:

We unfortunately have no way to verify this information, but [it is] interesting to read them in any case.

As mentioned, this information is not official and therefore with caution. Up to the expected introduction of HD ATI Radeon 4000 series in June, it is still for a while, so that the details can change anything.

An {{original research}} was put up. Please discuss. --202.40.157.145 (talk) 02:32, 18 February 2008 (UTC)

Well, as we can see now, this information was unfounded and is outdated; it was ancient speculation, possibly old information, as back then the word on the Radeon 4800s was that they would have only 480 stream processors, far fewer TMUs, and much higher clock rates. As we've seen now, the actual RV770 came with 800 stream processors, 40 TMUs, and more modest clock rates. It's also possible that some of this information has cluttered things up and fueled rumors of a supposed RV740 or Radeon 4700, which seems to sport specs eerily akin to what was previously thought to be the RV770. Nottheking (talk) 09:57, 12 September 2008 (UTC)

Power Consumption

Would really like to know how much power each card consumes, and whether or not is available commercially without cooling fan, ie using heatpipe or similar. —Preceding unsigned comment added by 118.90.76.16 (talk) 22:45, 29 June 2008 (UTC)

This could potentially be added, but doing so would add another dimension of complexity. The TDP of each card is considerably more doable in a lot of cases, since in the past few years ATi and nVidia have made a point of publishing official numbers for these. For older cards, they kept them secret, and such numbers when found were produced by independent research with varying accuracy. As far as the availability of finding such cards with passive cooling solutions in lieu of active HSFs, that would be outside the scope of this page, since that is up to independent board partners to select for their own products, and is not something specified by the actual model specifications that ATi sets out for their hardware. It's akin to factory-OC cards, which are also not specified by ATi/nVidia, else they wouldn't be considered OC cards. Simply put, there would be too many variants to possibly cover them all. Nottheking (talk) 10:04, 12 September 2008 (UTC)

AGP Signalling Voltages and Backwards Compatibility

Many of the cards list 'AGP', 'AGP 4x' or 'AGP 8x' as the Bus Interface, but this information says nothing about compatibility with other AGP interfaces. Some AGP 4x/8x cards are backwards compatible with AGP 2x while others are not. Some AGP 2x cards work in AGP 4x slots, others don't. It all depends on whether the card will accept the signalling voltage of the motherboard.

I think it would be useful to specify exactly which cards have versions supporting a specific AGP interface. It is a simple fix: where the Radeon X1050 lists AGP 4x/8x, PCIe x16, change it to show AGP 2x/4x/8x, PCIe x16 if an X1050 accepting 3.3V (AGP 2x) exists (it does). Also, no cards since the R300 series have supported 3.3V to my knowledge, so the changes should be minimal.

Unless anyone objects, I'll begin adding this information in the next few days. Mattst88 (talk) 02:04, 29 November 2008 (UTC)
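The compatibility rule described above can be sketched as a voltage check. The voltage table below is an assumption based on the commonly cited AGP signalling levels (3.3 V for 1x/2x, 1.5 V for AGP 2.0 4x, 0.8 V for AGP 3.0 8x), and the X1050 variant is hypothetical:

```python
# Assumed signalling voltage per AGP speed grade.
SLOT_VOLTAGE = {"1x": 3.3, "2x": 3.3, "4x": 1.5, "8x": 0.8}

def card_fits_slot(card_voltages: set, slot_speed: str) -> bool:
    """A card works in a slot only if it accepts the slot's
    signalling voltage."""
    return SLOT_VOLTAGE[slot_speed] in card_voltages

# Hypothetical X1050 variant accepting both 3.3 V and 1.5 V signalling
# works in an AGP 2x slot; a 1.5 V/0.8 V-only card does not.
print(card_fits_slot({3.3, 1.5}, "2x"))  # True
print(card_fits_slot({1.5, 0.8}, "2x"))  # False
```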

OpenGL 2.1 version note is misleading

The OpenGL 2.1 version note here http://wiki.riteme.site/wiki/Comparison_of_ATI_Graphics_Processing_Units#OpenGL_version_note states that it supports GLSL and geometry shaders. This is misleading. Geometry shaders are not mentioned in the OpenGL 2.1 specification and are usually supported by vendor specific extensions. I think the WP entry http://wiki.riteme.site/wiki/Opengl#Mt_Evans quite accurately explains when geometry shaders will be officially supported.

Maybe support for geometry shaders should be indicated separately by some different flag. 0meaning (talk) 08:23, 25 February 2008 (UTC)
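For reference, in that era geometry shaders were advertised through the driver's extension string rather than the core GL version, so a separate flag would amount to checking for the relevant extension. A minimal, purely illustrative sketch (the extension string here is made up; a real application would read it via glGetString(GL_EXTENSIONS)):

```python
# Hypothetical extension string as a driver might report it.
extension_string = "GL_ARB_shading_language_100 GL_EXT_geometry_shader4"
extensions = set(extension_string.split())

# Geometry-shader support is signalled by the vendor extension,
# not by the "OpenGL 2.1" version number.
has_geometry_shaders = "GL_EXT_geometry_shader4" in extensions
print(has_geometry_shaders)  # True
```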

Fixed a long time ago. WheretIB (talk) 23:58, 8 January 2009 (UTC)

ATI Mobility Radeon 9100 IGP - Data Mismatch

Hello, I have an ATI Mobility Radeon 9100 IGP GPU, and Windows reports that the internal DAC is clocked at 400 MHz. Every other utility I have used to gain information on the chip states that the chip is running at 300 MHz. Which one is incorrect? It doesn't seem like Windows is incorrect, since it is the one with direct access to the hardware, but every other utility says it is running at 300 MHz. What is going on? Presario (talk) 19:34, 1 March 2009 (UTC)

Radeon 4830

The page says that this card has 12 ROPs. According to Anandtech.com (http://www.anandtech.com/video/showdoc.aspx?i=3437&p=3) it has 16 ROPs. Can someone please change it? —Preceding unsigned comment added by 128.211.251.118 (talk) 04:08, 24 October 2008 (UTC)

The page also says that the card has 640 SPs (160x4). I thought it was 128x5. It's still 640 SPs, but it's a big difference. Don't all R700 processors come in clusters of 5, with four being simple ALUs and the fifth being a complex ALU? —Preceding unsigned comment added by 128.210.132.177 (talk) 16:46, 27 March 2009 (UTC)
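A sketch of the widely reported VLIW5 factoring for this generation (the 16-wide SIMD figure is an assumption drawn from public architecture descriptions, not from this page):

```python
# R600/R700-generation shader counts factor as
# SIMD cores x 16 thread processors x 5 ALUs (VLIW5).
def stream_processors(simd_cores: int) -> int:
    return simd_cores * 16 * 5

print(stream_processors(8))   # 640  (e.g. the HD 4830 configuration)
print(stream_processors(10))  # 800  (full RV770)
```

Under this layout a "640 SP" part is 128 five-wide clusters, consistent with the 128x5 reading above.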

PowerColor HD 4730

Hello, I picked this up from Xbit Labs:

http://www.xbitlabs.com/news/video/display/20090529111225_PowerColor_Officially_Launches_Radeon_HD_4730_Graphics_Card.html

And it shows that PowerColor has released a new HD 4730. I was wondering if you could add that to the list?

--124.188.26.68 (talk) 12:07, 30 May 2009 (UTC)

R800 / 5xxx series

This section should be removed as it has no sources and everything I can find points to the fact that R800 is not the correct codename ("Evergreen" is according to http://www.anandtech.com/video/showdoc.aspx?i=3573 which isn't a rumour site.) The 5xxx codename is original research, inferred only from 3xxx and 4xxx, because I can find no reliable references to it on the internet. And the product names and specs are just plain made up. —Preceding unsigned comment added by 86.163.186.102 (talk) 20:19, 28 June 2009 (UTC)

That's why it has the original research tag... this section will be kept until sources are provided. Furthermore, don't generalize about sites! Sure, sometimes they spin rumors, but this time they got bits and pieces from official sources. This might be a rumor: Trillian. Regarding the Evergreen bit, it remains to be seen, but I highly doubt that AMD will go for a numberless code name for PC parts.
Em27 (talk) 21:44, 12 August 2009 (UTC)

Keeping this section as the cards are now official.

What's with the "Cypress XTX" being included? I follow hardware news pretty religiously, and a quick Google search only turns up this wiki page and some rumour news articles without any hardware specs (WP:V). I'm deleting the entries until more evidence surfaces. --81.243.7.205 (talk) 15:51, 30 September 2009 (UTC)

I also follow hardware news, and the Cypress XTX (aka Radeon 5890) is just a rumor, not confirmed in any way. I have deleted it and added the Radeon 5870 Six, the six-monitor Eyefinity-capable card (confirmed and photographed) —Preceding unsigned comment added by 193.153.169.227 (talk) 17:54, 8 October 2009 (UTC)

Mobility x1100 and mobility HD 5xxx

Can anyone please add them to the list? 83.108.203.102 (talk) 18:47, 12 October 2009 (UTC)

TDP notice

Actual TDP may differ between board vendors. The figure listed is not TDP but rather board power consumption, so another footnote should be placed. —Preceding unsigned comment added by 216.93.208.25 (talk) 23:15, 19 October 2009 (UTC)

Radeon Mobility 4650 incorrect memory amount

I could just change it to reflect the correct data, but then someone would just change it back, so I'll reference it with "I have a DV7-2185dx laptop from HP. It includes the 1 GB Radeon Mobility card." This can also be selected as an option on the HP shopping page, so they reference it too. Thx. —Preceding unsigned comment added by 76.25.63.200 (talk) 03:21, 5 November 2009 (UTC)

r600+ "Config core"

Some numbers here are strange. I mean those AAxBB ones. What do AA and BB mean respectively? If BB is the number of units in each shader cluster then it must __always__ be 5 (and is probably wrong for the r700+ specs here). If it's the number of SIMD cores it may vary (and is wrong for the r600 specs). —Preceding unsigned comment added by 83.10.211.211 (talk) 17:14, 20 December 2009 (UTC)

chart

Which idiot is responsible for changing the chart? This comparison ought to make it easy to compare, right? Then let's keep the same chart for all GPUs from the 2xxx series on. —Preceding unsigned comment added by 84.56.174.63 (talk) 14:26, 30 January 2010 (UTC)

Wrong Mobility Radeon HD 2600 (M76M) Memory clock max

CCC and GPU-Z say that the max memory clock is 600 MHz, not 400. —Preceding unsigned comment added by 78.94.205.162 (talk) 19:09, 4 February 2010 (UTC)

Repetitious annotation makes table very hard to read

Every card in the Evergreen (HD 5xxx) series table provides angle independent anisotropic filtering. This text in the final column often wraps to make every row of the table six lines deep, until you make the fonts so small the table is hard to read for another reason. To read the table efficiently, I ended up stretching it all the way across my dual-head desktop, so that what I needed to see was easy to read on the left LCD panel (under Firefox). — MaxEnt 12:05, 30 March 2010 (UTC)

Video Acceleration

I ended up on this article while trying to determine whether my ATI card supported H.264 hardware acceleration. Unfortunately there does not seem to be any info on this type of feature support in the article. Is there another article that does have it? If not, it would be pretty useful if it were added. —Preceding unsigned comment added by Synetech (talkcontribs) 22:02, 22 March 2010 (UTC)


Note, the RV710 (HD4350, HD4550) has UVD 2.2, so I have appended that information to the appropriate list (not the chart, however). Alfredcisp (talk) 18:05, 1 June 2010 (UTC)Alfredcisp

HD5000

Someone made an incorrect edit to the Eyefinity edition capabilities.

They actually have all the capabilities of "normal" non eyefinity edition cards, but with support for more outputs. Alfredcisp (talk) 20:47, 3 June 2010 (UTC)Alfredcisp

DDR3 vs. GDDR3

After having to take another look over several articles regarding the subject, it's clear that it's important to stress a point here: GDDR3 and DDR3 are not the same. Be careful which you specify. In a lot of cases, it appears that through whatever cause (typos, ignorance, or laziness), some editors have, in other articles, introduced the incorrect term. I went through and checked this entire article, and was able to verify each of the cases where a card specified DDR3 rather than GDDR3. (The same was not true for the counterpart NVIDIA article; I had to edit a few card listings and add citations to correct it.)

I'd highly recommend that aside from paying special care not to confuse various forms of DDR vs. GDDR, that likewise, in cases where a "less-than-typical" type of memory is specified (the "typical" being DDR, DDR2, GDDR3, GDDR4, and GDDR5) that editors cite a source that explicitly names the memory type used. I can foresee this otherwise being a rather contentious editing issue, prone to edit/revert battles and swarms of [citation needed] tags. Nottheking (talk) 11:36, 26 May 2010 (UTC)

It should be noted, the primary difference between GDDR3 and DDR3 is specified power consumption levels (and TDP relating to the power consumption). —Preceding unsigned comment added by Alfredcisp (talkcontribs) 17:41, 1 June 2010 (UTC)
Actually, that's merely the most VISIBLE benefit for these cases. The primary difference is that GDDR3 is, contrary to what the name would imply, actually a derivative of DDR2. GDDR4 is also a derivative of DDR2, while GDDR5 is the first GDDR to be based upon DDR3. One of the chief functional differences is that DDR3 (and in turn GDDR5) have an external data clock rate doubled over their internal timings relative to DDR2, effectively producing what could arguably be termed a "quad-data-rate" (or "quad-pumped") interface. Though of course, the use of the term "DDR" indicates that it isn't a true QDR interface; there are four transfers per command-signal clock, but still the same two per data clock; the 'QDR' description is merely a simplification to help understand how DDR3/GDDR5 work. Nottheking (talk) 10:50, 3 July 2010 (UTC)
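A minimal sketch of the clocking relationship described above (the 1000 MHz command clock is illustrative):

```python
def effective_transfer_rate(data_clock_mhz: float) -> float:
    """DDR transfers twice per data-clock cycle, so the effective
    rate is 2 x the external data clock."""
    return 2 * data_clock_mhz

# In a DDR3/GDDR5-style interface the external data clock runs at
# twice the command clock, so per *command* clock there are four
# transfers: the "quad-pumped" shorthand.
command_clock = 1000.0                      # MHz
data_clock = 2 * command_clock              # 2000 MHz
print(effective_transfer_rate(data_clock))  # 4000.0 (MT/s)
```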

HD3410 correction

The HD3410 had up to 512 MB of dedicated GPU memory, as configured in the now-discontinued HP DV2 laptop series. —Preceding unsigned comment added by 124.42.77.160 (talk) 09:46, 2 August 2010 (UTC)

Pricing/Naming of HD 6000

While HD 6000 will be an architectural change, some of the line will continue the existing HD 5000 line, renamed under the HD 6000 brand. It was confirmed [citation needed] that the HD 6600 will be rebranded from Juniper to maintain the mainstream market. As for pricing, the HD 6770/6750 will be priced between the 5850/5830 and eventually replace them at the end of the year, while the HD 6800/6900 will focus on the high-end/professional market. —Preceding unsigned comment added by 75.63.48.111 (talk) 20:16, 13 September 2010 (UTC)

Encyclopedic content must be verifiable, and given AMD's stance on NVidia's rebranding, and the limited rebranding (that clearly separates generations) AMD has done in the mobile GPU and chipset GPU space, I call bunk on the Radeon 6600 being a rebranded 5770. 164.106.139.159 (talk) 19:07, 14 September 2010 (UTC)

As the editor above [164.106.139.159 (talk) 19:07, 14 September 2010] pointed out, the content here must be verifiable. To my knowledge, AMD has not even made an official announcement about the 6xxx series. The only "support" or "verification" we have are spreadsheets with information about two cards, the upcoming Radeon HD 6750 and 6770 cards, supposedly produced by AMD, and yet somehow we have an entire table of information about the entire 6xxx line, including precise technical specifications, release dates, and prices, with absolutely no mention that some (or most, as the case may be) of the information is speculative. Until AMD has released official information on this line, anything that can't be supported by at least an AMD-branded spreadsheet/chart should be removed, or at the very least replaced with "TBD" or "TBA". It seems bogus to represent this information as "fact," when in reality, it is largely supported by "nothing." TJShultz (talk) 16:13, 28 September 2010 (UTC)

Cayman is not going to have 48 ROPs with only a 256-bit bus. Any fanboy framing will result in a block from this section. —Preceding unsigned comment added by 70.131.62.13 (talk) 04:52, 4 October 2010 (UTC)

User Red dog and others keep framing/vandalising this page. I request that this page be semi-protected, and that all changes to values go through the community first. —Preceding unsigned comment added by 70.131.63.167 (talk) 22:58, 4 October 2010 (UTC)

HD6800 series support OpenGL 4.1

I saw the specification on AMD's website: the series supports OpenGL 4.1, not 4.0 as in the table [1]

You are correct, thanks for the help. I think it is a matter of driver support.--Prandr (talk) 10:55, 6 November 2010 (UTC)

Release Price

There is currently a "release price" column in the Northern Islands (6xxx series) chart. I have a number of reasons for which I feel it should be removed. Firstly, and most importantly, it isn't even a specification of the cards; it's irrelevant in this article. It gives only an indication of the launch price, which could change a week or so after release. Cards can go through different market segments/prices throughout their lifetime, and I don't think the release price is very useful. The charts are big enough as they are. I think we need to make decisions about which columns are actually important to the article, and make it as effective as possible for readers. Remember, these charts are supposed to be for the specifications of the cards. Paranoidmage (talkcontribs) 13:58, 4 November 2010 (UTC)

I agree with removing them. I think there's some minor historical value in knowing the release price, but it is outweighed by the benefit of making the tables smaller and easier to read. dolphinling (talk) 18:21, 4 November 2010 (UTC)
I disagree. The release price is important to show the target segment of the card. Of course this price will change over time, and that's why we don't list a "current price", but the "release price". Alinor (talk) 09:12, 6 November 2010 (UTC)
I disagree. The release price is as important as everything else, and it is included in the official release specification.
If the problem is table size, there are other ways to make it more compact without deleting info, e.g. making a single column for all 3 APIs, or even moving this information to the header if all GPUs in the family have the same API versions (e.g. Evergreen, R700 series). Poimal (talk) 05:46, 7 November 2010 (UTC)

Cleanup of unsourced information in Northern Islands (HD 6xxx) series

This page was recently semi-protected (see the request) so I decided to continue the work by Prander to remove unverifiable information.

I've added citations where I could, and removed other information. Some notes:

  • For the released GPUs 6870 and 6850, I generally did not remove information, even if I didn't source it. The notable exception is that I removed the listing of them having 2 GB of memory: I could find no reference to this, and Newegg sells no cards with it.
  • I removed the GPU with the codename Turks. I could find no source for even a model name for this, let alone specifications. (Also, the codename information from the driver release points to two separate Turks, XT and Pro.)
  • I removed most of the information about the 6350, except what I could source.
  • I left in a few things that are obviously consistent across all models, such as fab process and bus interface.

I hope that having citation markers all over the place will convince other editors to cite their stuff too, instead of edit warring with unconfirmed numbers. dolphinling (talk) 08:44, 4 November 2010 (UTC)

My citations on GPUs that have been released were removed. Now that I see the official specs on AMD's website, I realize the reference I chose wasn't the best. However, I think it would have been better to change the citation rather than remove it. More importantly, though, I think we need to decide how citations for released cards should work. I propose the following:
  • The card model should be cited to the official specification page on the AMD website
  • Information that is not in the official specifications (Release date, Code name, Fabrication process, Transistors, Die size, TDP, maybe more?) should be cited elsewhere.
  • Information that is in the official specifications should not have a citation marker, the one next to the model is enough
  • In the case of a section undergoing a lot of speculative changes (e.g. the current 6xxx section), information in the specifications should have citation markers, to make it clear which information is verified, and to encourage people adding new information to add citations. These markers can be deleted when the section is no longer the subject of lots of speculation.
I'm going to make the 6xxx section conform to this as best I can now, until I get some comments on this proposed style.
dolphinling (talk) 19:01, 4 November 2010 (UTC)
I removed those citations because the cards were already released. I was just following what is done with the other charts, where official specs or easily found information aren't cited in the charts. I didn't think they were all required for released cards that are well documented online. The way it is now, with a citation in every cell, is too overwhelming to look at, especially when the information is credible and doesn't need citations. However, speculation about future cards, and current specs that are more debatable, will definitely need sources. Paranoidmage (talk) 19:41, 4 November 2010 (UTC)
And even then it won't be enough. The sources themselves should be looked at with great skepticism: AMD is determined to keep info secret until the very release, and has quite succeeded in doing so. Even serious tech websites can't help publishing rumours to fill this information void. And our responsibility is not to help them spread. Basically, AMD is the only reliable source.--Prandr (talk) 00:33, 5 November 2010 (UTC)
This is generally true, but keep in mind that sometimes the websites of AMD/Nvidia/Intel may contain some vague or simply wrong information (because of neglect, copy-paste from previous materials, etc.) - and for many of the specifications they simply don't put it on their sites ("Superb performance" is a more catchy term than "16 execution units running at 300MHz"?). Alinor (talk) 09:16, 6 November 2010 (UTC)

I want to remind you to be very careful with your sources. Some good-looking "slides" were fake: http://www.3dcenter.org/blog/leonidas/die-leichtglaeubigkeit-gegenueber-praesentationsfolien and they were copied by most serious tech sites. That is what I meant above--Prandr (talk) 18:13, 25 November 2010 (UTC)

Shader count Tidbit

This only applies to R600 and derivative architectures (R600, RV770, Evergreen), and includes all the cards within each generation.

It should be noted that the current way of counting shaders on Wikipedia is:

(example):

HD4830 (128 * 5)

128 denotes the number of shaders enabled on the SKU (the actual die includes 160 shaders).

Each shader comprises 5 ALUs, which in current R600-derivative architectures (up to [confirmed] Evergreen, the third generation) consist of 4 simple units capable of MADD/MUL and one ALU capable of transcendental math. All five are tied to each other, so 5 independent instructions can be executed per instruction clock, but only one dependent instruction may be executed per instruction clock.

However, the shaders in every SKU with 80+ ALUs are organised into one of several "SIMD" (Single Instruction, Multiple Data) arrays of 80 ALUs, or 16 shaders.


Simply put, shaders in R600 (and derivatives) each comprise 5 ALUs. An independent ALU does not equal a single shader, as is the case in Nvidia's G80 (and G80-derivative) architecture.

Alfredcisp (talk) 18:01, 1 June 2010 (UTC)AlfredCisp
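For what it's worth, the counting convention described above can be sketched in a few lines of Python (the function name is made up here for illustration; the width of 5 ALUs per shader is the VLIW5 figure from the comment):

```python
# Illustrative sketch of the VLIW5 counting convention described above:
# each "shader" bundles 5 ALUs, so the marketed ALU count is units x 5.
def alu_count(vliw_units, alus_per_unit=5):
    """Total scalar ALUs for an R600-derivative part."""
    return vliw_units * alus_per_unit

print(alu_count(128))  # HD 4830: 128 VLIW5 shaders -> 640 ALUs
print(alu_count(320))  # Cypress: 320 VLIW5 shaders -> 1600 ALUs
```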

Well, it's not QUITE as cut-and-dried as you make it sound... This is due to the remarkably different approaches that AMD and nVidia have taken in designing their unified shader technology. nVidia's approach takes the more conventional method that had been used in earlier pipelined GPUs: each "stream processor" contained a single 4x32-bit (128-bit) SIMD unit, and could handle a single instruction per clock cycle, for up to 8 operations (4 multiply+add) per clock cycle.
AMD's stream processors take a different approach, as described in the article on the Radeon R600. There, each "stream processor" on its own has, just like nVidia's stream processors, a functional unit, though rather than being a SIMD unit, it exists as a scalar FPU; SIMD instructions have to be handled at the cluster level, which breaks a vector into its components and hands off each, along with its instruction, to a single stream processor. However, each stream processor can individually handle a single instruction per clock cycle.
So that means that technically, yes, the top-end GPUs have the number of stream processors claimed (320, 800, and 1600 for R600/RV670, RV770, and Cypress, respectively) and not 1/5th of that, because architecturally, those are BOTH the number of functional units on hand, as well as the maximum number of instructions per clock cycle they can handle; both are analogous to how they are counted on nVidia GPUs. So, in short, while it would appear, compared to the "old" (SIMD unit) design, that AMD simply split each stream processor into smaller sub-components, the design is actually the opposite: AMD modified the functional unit to change it from SIMD to a scalar FPU, THEN clustered them together to compensate for the weakness this caused. So while it'd be correct to label Cypress as "5x320 stream processors," labeling it as having only 320 shader units would be incorrect; it has 1,600 stream processors, they simply aren't as individually capable as those on nVidia's contemporary GPUs. IF you were to label Cypress as having 320 shaders, then you'd have to count the analog for nVidia, the number of "streaming multiprocessors," which are likewise loosely-structured clusters, in that case containing 32 stream processors each. (In other words, one would have to list the GTX 480 as having only 15 'shaders'.) Nottheking (talk) 11:20, 3 July 2010 (UTC)
I think we should display only the total number of stream processors in the tables here. It seems silly to show "1600(320*5)". There isn't any point in showing that the stream processors are in groups of five in this table. That information can be learned through further reading on the architecture, but is unnecessary here. There are other divisions besides these groups of 5 stream processors, such as the clusters within the die. We could very well display that, showing: "1600(20*16*5)". This would more accurately represent the layout of these chips. Again, this information is wholly unnecessary here and should be removed. I propose we only display the total stream processor count, in the following fashion: "1600". Paranoidmage (talk) 23:31, 7 July 2010 (UTC)
If only it were that clear-cut... One could just as well make a case for the count to be displayed as simply "320," counting only the number of clusters rather than the number of individual processing elements. Obviously, AMD pushes the 1600 figure because it looks more impressive, though each one of those 1,600 is nowhere near the capability of one of nVidia's (or ATi's earlier) shader units. The most obvious difference is that each of the 1,600 ALUs is more akin to an FPU, while a shader unit on an nVidia G80/90/GT200/GF100 is a SIMD unit. In essence, the multi-number approach you assert as not having "any point" exists in the interest of attempting a neutral point of view and avoiding being misleading; it's sort of a compromise. Nottheking (talk) 04:17, 31 August 2010 (UTC)

Um, have you all read any of the various explanations of these architectures that are part of the coverage of new GPU families on major tech sites? Ever since G80, Nvidia GPUs have been completely scalar, and AMD's vec4 plus one. Every AMD shader can perform both a single-precision multiply and add per clock, and Nvidia DX10 chips one multiply-add, and supposedly (but not usually) another multiply per shader as well. With DX11 both companies switched from MADD to FMA (fused multiply-add), and Nvidia dropped the extraneous MUL. Up above, someone flipped how Nvidia and AMD organize shaders: no DX10-or-later Nvidia chips have SIMD shaders. The reason Nvidia both performs better relative to theoretical GFLOPS versus AMD, and takes up so many more transistors and more die size for the different number of shaders, comes down to several things. The efficiency afforded by scalar granularity versus AMD's SIMD architecture is a boon, as are almost double the clock speeds, at the cost of additional separate pathways on the chip to accommodate individual scalar work. In addition, the extra render back-ends (ROPs) and memory width make for a more balanced chip performance-wise, but again at the cost of more transistors.

For a while, until recently, AMD had an abundance of shader power with relatively lower performance in other areas of their chips, and for the most part Nvidia used fewer but much higher-clocked shaders along with more other functional units (ROPs, wider memory interface) that help balance the chip and boost performance, at the cost of die size and power consumption. More recent developments, in the form of GF104 for Nvidia and Barts for AMD, have seen a rebalancing of chip resources to produce smaller, cheaper, less power-hungry chips that perform almost as well as their older, more expensive siblings on the same 40 nm process. As I write this at approx. 1 a.m. central US time, by all rumors we expect to see, in the next day or so, modifications to AMD's VLIW5 (very long instruction word) design towards VLIW4, which hopefully will bring performance gains of 20 to 25 percent per shader on average. We will look to reviews to tell whether or not this will be a boon for AMD's new Cayman chip to compete well with the GTX 580 and GTX 570. Hope this helps. Jtenorj (talk) 07:07, 14 December 2010 (UTC)

Edit request from Jtenorj, 14 December 2010: HD 6xxx IGP theoretical GFLOPS for 280 MHz clocked part

{{edit semi-protected}} I believe that the theoretical GFLOPS figure for the HD 6xxx IGP with the 280 MHz clock speed is incorrect. The speed is well documented elsewhere online, and the figures for texture fill and pixel fill look correct, but the math for the GFLOPS looks off. 280 MHz times 80 shader processors times 2 single-precision FLOPs per shader per clock (fused multiply-add) should come to 44.8 GFLOPS. Please correct at your earliest convenience. Jtenorj (talk) 08:02, 14 December 2010 (UTC)
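The arithmetic in the request can be checked with a one-line helper (a sketch; the function name is made up here, and the factor of 2 assumes one fused multiply-add per shader per clock, as the request states):

```python
# Theoretical single-precision GFLOPS = clock (MHz) x shaders x FLOPs/clock / 1000.
def theoretical_gflops(clock_mhz, shaders, flops_per_clock=2):
    return clock_mhz * shaders * flops_per_clock / 1000

print(theoretical_gflops(280, 80))  # -> 44.8, as the request says
```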

Not done: please provide reliable sources that support the change you want to be made. Would be willing to as soon as I get a source. -- DQ (t) (e) 22:53, 17 December 2010 (UTC)

Edit request from 173.22.91.148, 13 December 2010

{{edit semi-protected}} Under the section of the page regarding the HD 6000 GPU portions of the new Bobcat APUs, please change the theoretical GFLOPS from 56 to 44.8, because this will accurately represent the performance based on the clock speed in MHz times the number of shaders times 2 (one multiply and one add per FMA).

173.22.91.148 (talk) 01:21, 14 December 2010 (UTC)

If you can bring a reliable citation, feel free to edit everything yourself. The semiprotection has been lifted at my request.--Prandr (talk) 12:01, 17 December 2010 (UTC)
Not done: See below. -- DQ (t) (e) 22:56, 17 December 2010 (UTC)

HD5870 Eyefinity6 TDP

I've been seeing conflicting info on this matter: Tom's Hardware lists it as 34/228 W (http://www.tomshardware.com/reviews/radeon-5870-eyefinity6,2595.html), though the reference from AMD's site says 27/188 W, like the regular 5870. Logically 34/228 W would be more correct; did AMD have a typo there? —Preceding unsigned comment added by 98.203.55.77 (talk) 03:46, 11 April 2010 (UTC)

AMD has updated the information to show the 228W figure Toms Hardware listed. AMD though does not list a minimum TDP on the same page. (http://www.amd.com/us/products/desktop/graphics/ati-radeon-hd-5000/hd-5870-eyefinity-6-edition/Pages/overview.aspx#2) (99.179.78.30 (talk) 06:11, 22 January 2011 (UTC))

FireStream workstation computing card missing

AMD FireStream, firestream_9270 Alinor (talk) 15:38, 21 January 2010 (UTC)

Done. Alinor (talk) 14:08, 6 February 2011 (UTC)

ATI Radeon HD 500v series

This series is actually a rebranded Radeon HD 4 series with slightly higher clock speeds. AMD classifies them as the 540v series, 550v series, etc. on their site. These series still support DX10, while the HD 5000 series supports up to DX11. Why is the 500v series put in the 5000 series section of this article? -- Livy the pixie (talk) 09:45, 15 February 2011 (UTC)

Double Precision

This article lists the HD 6790 as not having DP. According to AMD's own product listing[2] under the AMD App Acceleration section it states "Double Precision Floating Point".

I would do the edit myself but this is my first time being "active" on Wikipedia.

Thanks in advance.

DarkWikiMuse (talk) 02:24, 5 July 2011 (UTC)

Column headings need to be in more places than just the top of the tables

Every four or five rows would probably be good. Otherwise, these charts are numbers without context more often than not. 99.88.142.167 (talk) 23:21, 2 September 2011 (UTC)

WiiU specs

What is the source of the Wii U GPU specs? Especially the RAM sounds fishy. — Preceding unsigned comment added by 130.234.180.172 (talk) 09:32, 5 September 2011 (UTC)

Re-organization and Correction of Data (Cleanup)

I'm going to make some major changes to the way the data is set up to make it more organized. I'm also going to do my best to correct any incorrect or misleading data. I've already changed up to R300. — Preceding unsigned comment added by Blound (talkcontribs) 02:59, 26 November 2011 (UTC)

Not a bad thing that somebody is reorganizing this, but may I ask why you are deleting some of the miscellaneous information? Asdfsfs (talk) 14:50, 18 December 2011 (UTC)
Noticed some things that are flat-out wrong. For example, the memory bandwidth on the X1950 GT was listed as 38.4 GB/s in the old version, but how come you state it's 64 GB/s? That number is impossible, because the memory is clocked lower than on the Pro, which has 44. Perhaps a mix-up? If you're really trying to reorganize these pages, it would be great if the numbers were still correct afterwards... Asdfsfs (talk) 22:28, 25 December 2011 (UTC)

Asdfsfs edits

I recently made some edits, which were reverted by User:Asdfsfs without explanation.

My edits: adding the architectures VLIW5/VLIW4/GCN; moving All-in-Wonder upwards to correspond to the release date of its most recent member (instead of leaving it at the bottom, where the newest series are); and clarifying the HD 7000 marketing mess by adding a second header in the middle, like in the R600 series. Ianteraf (talk) 06:48, 8 January 2012 (UTC)

If you would bother to thoroughly read the differences, you would see that I presented the architecture information in a much less awkward way. Concerning the 7000 thing: I think it's OK as it is now, because this line at least doesn't feature cards that start with a different number. Asdfsfs (talk) 15:48, 8 January 2012 (UTC)

Definitive TDP of the HD 7970

I've seen 250W, 225W and 210W mentioned on tech sites, this wiki page still says 250W, can someone clarify this issue and say with certainty what the TDP is? 195.169.213.92 (talk) 20:46, 10 January 2012 (UTC)

Radeon HD 6000-series and HD 7000-series

HD 6000 will be an architectural change, rather than simply the same design warmed over with more shaders to increase performance, so the increase in shader count will be limited. —Preceding unsigned comment added by 70.131.56.98 (talk) 18:29, 11 September 2010 (UTC)

HD6770 1280sp/64tmu/16-32rop HD6750 1120sp/56tmu/16-32rop —Preceding unsigned comment added by 85.93.116.50 (talk) 09:31, 24 September 2010 (UTC)

Then where are the specifications of the Radeon HD 7000 series? — Preceding unsigned comment added by Lacp69 (talkcontribs) 14:20, 16 January 2012 (UTC)

Northern Island HD7000 specification

Many Wikipedia vandals attempt to change the ROP count of Tahiti XT from 48 (or maybe 24) to 32, despite the bus being 384-bit. However, it is impossible to have 32 ROPs on a 384-bit bus. — Preceding unsigned comment added by 70.131.80.64 (talk) 17:07, 17 December 2011 (UTC)

There are very many vandalistic attempts to change the specs of the Tahiti GPU (79x0). For example, the specs are 2048/128/32 for the 7970 and 1792/112/32 for the 7950. In regard to the question about 32 ROPs: there are 32. Please refer to the original ATI slides. — Preceding unsigned comment added by 129.78.32.21 (talk) 13:51, 21 December 2011 (UTC)

Also, someone has given high-end 7xxx mobile GPUs desktop specs; the 7850M/7870M look a bit too much like the desktop variants. — Preceding unsigned comment added by 158.37.228.14 (talk) 09:57, 8 March 2012 (UTC)

Rage 128 Ultra, what is it?

The Rage 128 Ultra is an OEM-only version, but which one is it really? What did ATi do to it to make it not work with the reference drivers, forcing owners to use only the often outdated OEM-supplied drivers? Is it possible to hack the reference drivers to make them work with this chip? Dell and Gateway used the Rage 128 Ultra in many models of desktop PCs. Bizzybody (talk) 09:45, 24 April 2012 (UTC)

Standalone section for Compute Capability and OpenCL

I think someone should make a standalone section for Compute Capability and OpenCL support on ATI cards. This section should include the versions of CC and OpenCL and the supported cards (optionally with chip names, e.g. RV600). A little table with some information on compute capability is on the Nvidia page (Comparison of Nvidia GPUs). I wrote something similar for that Nvidia page today. Sokorotor (talk) 15:49, 28 April 2010 (UTC)

For instance, the 5800 series of GPUs is listed in a table with the text "all support OpenCL 1.1". True - but at least my 5850 card reports device support for OpenCL 1.2 with a recent AMD Linux driver (platform AMD-APP 923.1). I suppose the supported OpenCL version usually depends more on the software than on the hardware. (I don't know if the recent AMD driver supports 1.2 on all 5800 hardware, though.) — Preceding unsigned comment added by 85.229.117.254 (talk) 20:50, 27 June 2012 (UTC)

HD7300M-HD7600M missing

The renamed HD 5000/6000 40 nm chips have already been released to OEMs,[2] but currently the article lists only the not-yet-released 28 nm chips. Ianteraf (talk) 08:28, 4 February 2012 (UTC)

The 7300M is still missing. Ianteraf (talk) 12:47, 9 August 2012 (UTC)

Matthew Anthony Smith recently inserted a large number of links to the http://www.techpowerup.com/ site into the table headers of many of the GPUs. I removed them as part of a quality assurance / cleanup effort, which unfortunately became necessary after many controversial edits (in various articles) by this user.

  • These links just repeat the contents provided here already and therefore add no extra value to readers of this article. Also, they don't provide any information which would not be available in many other places as well.
  • Wikipedia policies such as WP:EL restrict our usage of external links to certain, well-chosen cases. External links should be of particularly high quality. Links to forums and social media platforms are not normally allowed due to their short-lived nature and their typically low quality and their lack of editorial contents. Therefore, the inserted links to http://www.techpowerup.com/ in the table headers do not qualify as reliable reference, they do not even name sources or authors/editors, so this is simply nothing we can count on.
  • According to the Wikipedia Manual of Style, direct (or piped) links to external sources (as they were still common many years ago) are deprecated in article space for a long while and therefore should be avoided, in particular in headers. If we need to link to other sites, we should do it inside of references and use proper syntax. Alternatively, we could add them to the optional external links section, however, in this case, a single link to the home page of the database would be enough and we don't need dozens of individual links. See WP:LINK.

Personally, I think we don't need any of these links at all, but if you think a link is useful, I suggest adding a single link to the database under "External links" again. Also, links to reliable references (per WP policies) are acceptable inside the table if we use proper syntax.

Finally a note on the various "facts" templates I added to some of the table values. I did not want to blindly revert all the potentially problematic edits in one go, but found various table values or their semantics changed by Matthew Anthony Smith without any edit summary. Some values were simply changed, in some cases, footnotes were removed and in many cases lists of values and ranges were converted to look the same. I started to flag these changes in order to make readers aware of them (but there are many more). They need to be carefully checked by someone using a reliable reference and can be removed afterwards, ideally by providing the reference at the same time as well. Thanks. --Matthiaspaul (talk) 22:33, 13 September 2012 (UTC)

Missing 2012 APUs, Embedded, FirePro

Missing are many of the AMD Fusion APUs: HD7400D, HD7500D, HD7600D, HD7300, HD7500G, HD7600G, FirePro APU

Missing are the Embedded GPUs, Brazos APUs, and Trinity APUs

Missing or wrong/preliminary data (e.g. display outputs of W9000) for W5000, W7000, W8000, W9000, W600. Ianteraf (talk) 13:10, 9 August 2012 (UTC)

The struck-through items are done. Ianteraf (talk) 06:56, 29 September 2012 (UTC)

Please sort all values logically, starting with the lowest!

Dear editors, when you enter new data, please sort all values by size/performance, starting with the lowest value or worst equipment, then citing the higher values or better equipment. All in logical order.

Some entries are a huge mess, since the different values in one category are not sorted by size/performance.

Examples:

Wrong: "RAM: GDDR4, DDR2, GDDR3"

Wrong: "RAM: DDR2, GDDR4, GDDR3"

Right: "RAM: DDR2, GDDR3, GDDR4"


Thanks for your consideration. -- Alexey Topol (talk) 06:55, 30 September 2012 (UTC)
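A minimal sketch of the requested ordering in Python (the rank table below is illustrative and only covers the memory types from the examples above):

```python
# Sort memory types by an assumed generation rank, oldest/slowest first.
RAM_RANK = {"DDR2": 0, "GDDR3": 1, "GDDR4": 2, "GDDR5": 3}

def sort_ram(types):
    return sorted(types, key=RAM_RANK.__getitem__)

print(sort_ram(["GDDR4", "DDR2", "GDDR3"]))  # -> ['DDR2', 'GDDR3', 'GDDR4']
```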

HD6000 cards with VGA output?

The HD5000 series was built by several manufacturers with an integrated, dedicated VGA port, so that together with the DVI-I port and an adapter you could connect two VGA-monitors to one card. Will the HD6000 series cards, I mean some of them, also feature dedicated VGA ports, plus the DVI-I port ? -- Alexey Topol (talk) 00:01, 3 November 2010 (UTC)

In the meantime, many cards with dedicated VGA ports have come out in the HD 6xxx series. But none of them features DVI-I ports; they all sport DVI-D ports only. -- Alexey Topol (talk) 06:59, 30 September 2012 (UTC)

Radeon HD 8470 (624 GFLOPS) vs. Radeon HD 8570 (560 GFLOPS)

The Radeon HD 8470 is a Turks PRO, similar to the Radeon HD 7570 and Radeon HD 6570 - so why should a Radeon HD 8570, with a HIGHER core clock, a BETTER bus interface and the same bus width, only have an output of 560 GFLOPS? FLOPS are calculated by (but I can't do that). Can somebody please look into this miracle (and correct the numbers)? Thanks -- 80.245.147.81 (talk) 14:45, 4 February 2013 (UTC)

This list is a rough draft. When I made it, it was just to give people the clocks and names; all of the calculated specs are either carried over or wrong. Feel free to correct it. Matthew Smith (talk) 18:46, 4 February 2013 (UTC)


Ask AMD - those performance numbers for the 8570 & 8670 come straight from their lips; they're the only 2 on the list that are confirmed. The source is here [3]. Perhaps they've added something that helps improve performance, and we might need a new calculation for the GCN2 cards? — Preceding unsigned comment added by 58.178.251.175 (talk) 05:06, 5 February 2013 (UTC)

HD 8470 - 25% more cores; HD 8570 - 12.3% more clock.

There's your answer: 25% more cores clocked 12.3% slower still yields a gain.

All this hand-wringing and nobody has pointed out that the units in the "definitive" formula above are not self-consistent. 87.112.175.80 (talk) 06:50, 12 February 2013 (UTC)
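The cores-versus-clock argument in this section does work out numerically; a quick sketch (assuming, as the posters do, that theoretical throughput scales linearly with both core count and clock):

```python
# 25% more cores at a 12.3% lower clock: 1.25 * (1 - 0.123) ~ 1.096,
# i.e. still roughly a 10% gain in theoretical throughput.
def relative_throughput(core_ratio, clock_ratio):
    return core_ratio * clock_ratio

print(relative_throughput(1.25, 1 - 0.123))  # -> 1.09625
```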

HD 2xxx to HD 6xxx shader count

In the following article (http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute) it is explained that, in the architectures used up to the Radeon HD 6xxx, the stream processors were VLIW (5, then 4). Each one comprised 4 FPUs, 1 SFU (gone in the HD 69xx) and 1 branch unit. So the numbers in the shader columns of the tables are wrong. Either change them to simply count FPUs, or put in the actual numbers of shaders (VLIW processors). This would explain why AMD's flagship chips have always been smaller than NVIDIA's, and why, despite higher theoretical peak performance (only achievable through independent float operations, not available in real-world usage), NVIDIA's high-end chips have always been faster. — Preceding unsigned comment added by 93.38.171.227 (talk) 08:52, 19 February 2013 (UTC)


Those numbers come from AMD themselves.

Last poster, those were supposed to be tildes (~), not dashes (-), to sign your post (at the end). FYI. Now for some clarification.

Nvidia's high-end chips have NOT always been faster (GeForce FX 5900/5950 vs Radeon 9800, GeForce 6800 vs Radeon X800/X850, GeForce 7800/7900/7950 vs Radeon X1800/X1900/X1950, GTX 680 vs HD 7970 GHz Edition, and GTX 690 vs HD 7990). Nvidia chips have been bigger in the past for several reasons, but not because each shader was much more individually capable (well, more efficient, but not more functional units). The larger sizes were due to various things, such as the fully scalar FPUs requiring more in the way of extra data routing versus AMD's simpler VLIW5/VLIW4 setups, more ROPs/a wider memory interface, and older process tech (90 nm vs 80 nm, 65 nm vs 55 nm). The Nvidia chips were at times faster due to higher shader clocks (2x-2.5x the core clock, thanks to the simple single FPUs versus AMD's more complex VLIW5/VLIW4. It seems like I just contradicted myself, but Nvidia's small single FPUs could run at high clocks because of their small size, while AMD's clusters were limited in that regard. The flow of data in and out of AMD's clusters had less of an impact on overall transistor count than what was required to get that level of granular computing on Nvidia products), as well as the previously mentioned more efficient architecture and additional ROPs/larger memory paths. Efficiency may be in question, since the high clock speeds of the older shaders had a negative impact on power consumption vs AMD parts at 55 nm-40 nm, and also on some lower-end parts at 80 nm/65 nm for Nvidia vs 65 nm/55 nm for AMD. With the latest chips from the two rivals, Nvidia has gotten a lot more like AMD (massively more shaders, but at core clock, and fewer ROPs/a narrower memory interface) and AMD has gotten more like Nvidia (the revamped shader clusters are now four GROUPS OF 16 SHADERS, like the original G80/G90, vs the 16 groups of 4 in Cayman, plus a wider memory interface). IMO, AMD would have been better off with the same old 256-bit memory interface (see how well the HD 7870 does compared to the HD 7950). Then Tahiti would have been smaller, cheaper to produce, and less power-hungry (like GK104). Jtenorj (talk) 21:23, 11 March 2013 (UTC)
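The two counting conventions debated in this section can be expressed numerically (a sketch; the function name is made up, and the widths of 5 and 4 are the VLIW5/VLIW4 figures from the discussion):

```python
# Convert a marketed FPU/ALU count into the number of VLIW processors.
def vliw_processors(fpu_count, width):
    assert fpu_count % width == 0, "count must be a multiple of the VLIW width"
    return fpu_count // width

print(vliw_processors(1600, 5))  # Cypress, VLIW5 -> 320
print(vliw_processors(1536, 4))  # Cayman, VLIW4 -> 384
```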

Unannounced/Rumoured products

The Radeon HD8000 section in particular is (and for a long time has been) full of unannounced products with rumoured specifications and prices (with "references" to rumour sites, as if that somehow legitimises it). When did Wikipedia become a dumping ground for rumour and speculation? 203.45.39.201 (talk) 05:14, 11 February 2013 (UTC)

There's actually a long history of this happening, and it's usually right on the money. Most of the time they're actually leaks, not rumors. — Preceding unsigned comment added by 211.26.171.159 (talk) 12:31, 1 April 2013 (UTC)

7970 launch date

Wasn't it launched on Dec 21st, 2011 instead of Jan 9, 2012? http://www.techpowerup.com/reviews/AMD/HD_7970/ — Preceding unsigned comment added by 193.109.40.21 (talk) 13:31, 19 March 2013 (UTC)

Paper Launch - December 22nd, 2011


Retail availability - January 9, 2012 76.118.213.137 (talk) 02:21, 18 April 2013 (UTC)

FirePro W10000

The entry for the FirePro W10000 looks a bit off: it's incredibly unlikely, as it's basically an AMD version of the Titan/GK110, despite the fact that AMD has denied multiple times that such a GPU is being made; not to mention the lack of so much as a credible rumor to prove its existence, at least that I've seen. Interestingly, its listing is quite similar to what was previously - and incorrectly - listed as the HD 7990 Malta, albeit with a 384-bit GDDR6 interface rather than the false Malta's 512-bit GDDR5. Until more credible information is available on this card, I feel that it should be removed from the listing. GungnirInd (talk) 23:48, 28 April 2013 (UTC)

I blame http://www.techpowerup.com/gpudb/2366/radeon-hd-9970.html (although that says it was based on speculation; I don't know where they got it from). Although that has it listed as a Volcanic Islands card, not a 7000 or 8000 series. By the time it comes out, Nvidia will have a shiny new card that's better than Titan (Maxwell).

GDDR6 has been rumoured for a very long time. The specs are almost dead-on for someone who slapped three 7790s together, apart from the changed memory. I guess it's plausible, but one would hope that if it is a Volcanic Islands card, AMD would be shooting higher than that for them as well. I'll remove it for now; we can always re-add it later if it does come out. — Preceding unsigned comment added by 210.50.139.54 (talk) 17:40, 4 May 2013 (UTC)