Message boards : Graphics cards (GPUs) : AMD GPU Status for 2013?
Hi guys/girls, | |
ID: 28093 | Rating: 0 | rate: / Reply Quote | |
Hi guys/girls, Between drivers 12.10 and 13.1 there have been two big speed increases in AMD's OpenCL. The first bump was about 20% and the second about 10%. Using anything later than 12.11 beta 8 now makes the equivalent (to NVIDIA) AMD cards the fastest at POEM. | |
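As a side note, those two driver bumps compound rather than add, so the overall gain is roughly 32%, not 30%. A quick sketch of the arithmetic (plain math, nothing project-specific):

```python
# Compound the two reported OpenCL driver speedups: ~20%, then ~10%.
bumps = [0.20, 0.10]
total = 1.0
for b in bumps:
    total *= 1.0 + b  # each bump multiplies throughput

print(f"combined speedup: {total - 1.0:.0%}")  # prints "combined speedup: 32%"
```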
ID: 28117 | Rating: 0 | rate: / Reply Quote | |
> Using anything later than 12.11 beta 8 now makes the equivalent (to NVIDIA) AMD cards the fastest at POEM.
I've been an "nVidia fanboi" for a long time, but price and income realities are forcing me to look again at AMD. Just curious, how are you assessing equivalency in the above statement: equivalent price? Equivalent numbers of "cores", or whatever the proper term is? (Of all the sub-topics in BOINC, GPU is the one I know least about, trying to catch up tho'.) | |
ID: 28141 | Rating: 0 | rate: / Reply Quote | |
I can't speak for Beyond, but I think he determines equivalency by looking at the top single-GPU AMD and Nvidia cards, which are the 7970 and the 680, and saying those are "equivalent", and then the 670 and the 7950 are, and so on. Before AMD dropped their prices, I'm pretty sure the 7970 and 680 were very close in price and close in performance, so they could be considered equivalent. However, I've heard AMD GPUs can overclock more and they are cheaper in price, so some could argue that GPUs like the 7970 and 680 aren't equivalent. | |
ID: 28142 | Rating: 0 | rate: / Reply Quote | |
Yeah, I'd base it roughly on price as well. If one consumes significantly more power this might also need to be factored in (for crunching), but in the current generation both are rather efficient. | |
ID: 28166 | Rating: 0 | rate: / Reply Quote | |
> Using anything later than 12.11 beta 8 now makes the equivalent (to NVIDIA) AMD cards the fastest at POEM.
Pretty much what Dylan and ETA said. Also take a look at the top computers at POEM, dominated by AMD GPUs. There are quite a number of other projects where that's also true. | |
ID: 28173 | Rating: 0 | rate: / Reply Quote | |
Thanks for the info, all of you. | |
ID: 28174 | Rating: 0 | rate: / Reply Quote | |
There's an official overclocked 7970, the 7970 GHz Edition, also offered by AMD, and a 7990 card made by other companies that is like the 690 in that it has two (7970) GPUs on one board. | |
ID: 28176 | Rating: 0 | rate: / Reply Quote | |
IMO the dual GPU cards (both NVIDIA & AMD) have had a troublesome history. I'd recommend 2 7970s over a dual GPU model. The 7970 has also come down in price a lot. | |
ID: 28179 | Rating: 0 | rate: / Reply Quote | |
What sort of troubles? | |
ID: 28183 | Rating: 0 | rate: / Reply Quote | |
> What sort of troubles? | |
ID: 28193 | Rating: 0 | rate: / Reply Quote | |
Maybe a little bit about brands, from just one person's experience. I've been running an average of 16+ GPUs for the last few years. I've had 4 ASUS cards; of those, 1 had a fan failure and 2 5850 cards had complete failures. They were sent in for RMA 2 months ago and I still haven't received replacements. The 4th ASUS I sold. Had a couple of EVGA cards, they're still OK. One Diamond, OK. One Sapphire, OK. Three MSI, all OK. Three Galaxy, all OK. One HIS 5850, fan failed and they refused to send a replacement fan, so it was sent to them for RMA at the same time as the ASUS; still haven't gotten that one back. I've had more XFX cards than all the other brands combined. I bought so many XFX strictly due to the double lifetime warranty (check the model, not all have the super warranty). Of those I've had just 2 fan failures, both on 5850 cards. Both times I contacted XFX support and they had complete new heatsink/fan assemblies at my door in 2-3 days, no cost or hassle to me. Also, my neighbor's running 4 XFX cards, no problems there either. | |
ID: 28194 | Rating: 0 | rate: / Reply Quote | |
Thanks for your opinion and the info about the types of problems. I have my GTX 570 running at 40C to 49C because it's in a cabinet I built that is air conditioned with cold winter air. In addition to that the GPU vents into a duct that carries the exhaust directly to either the outdoors in summer or into the furnace's cold air return duct in winter to heat the rest of the house rather than the cabinet and computer room. If I get a 7990 it will go into the same cabinet. I know I can sustain that operating temperature all summer, no problem, by patching in a small air conditioning unit that will supply the cold air mother nature so kindly provides now. It's a small AC unit and it needs to cool a very small volume so it will run very cheap. In my mind that solves the VRMs overheating problem. | |
ID: 28195 | Rating: 0 | rate: / Reply Quote | |
> Thanks for your opinion and the info about the types of problems. I have my GTX 570 running at 40C to 49C because it's in a cabinet I built that is air conditioned with cold winter air. In addition to that the GPU vents into a duct that carries the exhaust directly to either the outdoors in summer or into the furnace's cold air return duct in winter to heat the rest of the house rather than the cabinet and computer room. If I get a 7990 it will go into the same cabinet. I know I can sustain that operating temperature all summer, no problem, by patching in a small air conditioning unit that will supply the cold air mother nature so kindly provides now. It's a small AC unit and it needs to cool a very small volume so it will run very cheap. In my mind that solves the VRMs overheating problem.
Nice system. Mine is not so high tech, but works. In the winter the computers heat the whole (largish) house. I have them spread around strategically and the forced air fan distributes heat well. My furnace is a combination wood and off-peak electric, and last winter it was not used at all except for air distribution (and this is Minnesota). The electric company called and apparently didn't believe my explanation, so they sent out a repairman because they thought my off-peak electric meter must be broken. It wasn't. This winter (because of a couple cold snaps) I've burned wood for parts of 3 days, no electric. In the summer I have a whole house fan that circulates a lot of air. Only used air conditioning two days last summer, and that was because I felt sorry for Cocoa (see avatar). Regards/Beyond | |
ID: 28196 | Rating: 0 | rate: / Reply Quote | |
While I'm in a talkative mode: most of my PCs are running 2 GPUs per machine. For a year or two I ran 2 boxes with 3 GPUs each. You know what, it was a PITA. Limitations in BOINC, more driver issues, higher temperatures. Not worth it for me. The 2 GPU boxes easily handle ANY summer heat and are quieter due to lower fan speeds. The best way IMO to run the 2 GPU boxes is with one ATI/AMD and one NVIDIA. That way there's no need for BOINC exclusions, which are problematic due to BOINC bugs. With one of each it's easy to set a different project per GPU, no fuss, no muss. POEM is inefficient when trying to run it on more than one GPU per machine, but for instance running the ATI on POEM and the NVIDIA on GPUGrid works perfectly. Over and out... | |
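For anyone who does want to pin projects to specific GPUs with exclusions anyway, they live in the BOINC client's cc_config.xml via `<exclude_gpu>`. A minimal sketch; the project URLs and device numbers below are examples only, so check your own client_state.xml for the exact URLs and GPU device numbering:

```xml
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <!-- Example only: keep POEM off GPU 1 and GPUGrid off GPU 0,
         so each project runs on its own dedicated card. -->
    <exclude_gpu>
      <url>http://boinc.fzk.de/poem/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```

cc_config.xml goes in the BOINC data directory; re-read the config from the client menu (or restart the client) for it to take effect.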
ID: 28197 | Rating: 0 | rate: / Reply Quote | |
Nice discussion and a pretty nice solution @ Beyond! | |
ID: 28201 | Rating: 0 | rate: / Reply Quote | |
My "cooler cabinet" may sound high tech but it's not. I built 90% of it from scrap that cost me nothing; the remainder was about $20 worth of new clothes dryer vent duct. Even the paint I'm going to paint it with was scrounged for free. As soon as I paint it up pretty I'm going to disassemble it and make a video or series of photos while reassembling to show how I did it. The basic principle is... don't let the heat mix into the room and heat it up and then expect to use that hot air to cool stuff off, contain the heat the instant it comes out of the machine and deal with it sensibly. The alternative is monster, high dollar heat sink/fan combos or liquid cooling. All my gear works with stock fans and heat sinks. Like I keep telling mitrichr, brains (thinking), not money, is the answer. I'm not a brainiac, I just think a lot :-) Cuz I have no money!! | |
ID: 28203 | Rating: 0 | rate: / Reply Quote | |
> Nice discussion and a pretty nice solution @ Beyond!
Thanks! As far as projects go (personally I like MW, Collatz, PG, and Moo is OK too) (and don't forget Donate, the GPUGrid sister project), the tide is moving toward OpenCL. CAL is dead, and both AMD and NVIDIA seem to be allocating more of their resources to the common language. OpenCL has recently made large performance strides on both platforms and is getting better with every release. It's just a matter of time IMO. Regards/Beyond | |
ID: 28204 | Rating: 0 | rate: / Reply Quote | |
Hi All, | |
ID: 28218 | Rating: 0 | rate: / Reply Quote | |
Dagorath: If I could put a GPU on a Raspberry Pi I would do it, assuming there was the required app and other prerequisites. Just enough CPU to drive a whopping GPU... that's the goal. We build our own systems for our lab that are heavy on GPU and light on CPU. We also designed a custom case to deal with the cooling problems. I'll post more details if anyone's interested. The cost of a single-socket host system isn't too painful when amortised over 4 GPUs, and it's a lot less painful than making custom motherboards. MJH | |
ID: 28219 | Rating: 0 | rate: / Reply Quote | |
@MJH | |
ID: 28222 | Rating: 0 | rate: / Reply Quote | |
More than happy to see this photo of lab hardware. | |
ID: 28228 | Rating: 0 | rate: / Reply Quote | |
I'd love to see some pictures too. MJH, you do realize that your goals are different than ours (most of us). While you design for one project, our systems are designed to run many projects with many different needs. We also run CPU projects at the same time, generally on all cores. Projects like POEM need massive CPU support. Other GPU projects need almost none. Most scientific projects are CPU only so we need to be able to support those too. Most of my GPUs are ATI so I ran up 500,000,000 credits on Donate, hopefully it helped a little to fund a budding researcher. But there are so many good projects to support... | |
ID: 28231 | Rating: 0 | rate: / Reply Quote | |
Off-peak electric is a scam. Most people revert to their old habits, and the electric companies know that - that's why they turned up, and left in shock. I think it's only GPU crunchers that can actually benefit. | |
ID: 28236 | Rating: 0 | rate: / Reply Quote | |
On the bright side... | |
ID: 28237 | Rating: 0 | rate: / Reply Quote | |
> Projects like POEM need massive CPU support.
POEM first and foremost needs a lot of RAM bandwidth ;) I tried underclocking and overclocking a dual-core CPU with no increase or decrease in GPU load, but when I changed the RAM clock by only a few MHz the GPU load changed massively. @evil penguin: I wish Folding@home would go back to BOINC :/ @skygiven: huh? MW has not really been interrupted here for months, since they got the latest GPU problem sorted out. ____________ DSKAG Austria Research Team: http://www.research.dskag.at | |
ID: 28238 | Rating: 0 | rate: / Reply Quote | |
Hmmm, maybe AMD wasn't such a sound purchase after all. Seems like there are some options for it, however. I couldn't give 2 hoots about the credits, so Albert or Einstein will do fine, or maybe Folding@home is where it will go. I wouldn't crunch MW or SETI if they were the last projects standing; in fact, the sooner they blow up and never return the better. I'll put my 7970 to good use somewhere when it arrives. Optimal crunching? Well, it's a worthy goal but life is too short to lose sleep over it. I guess that's easier to say when one gets electricity as cheap as I do. | |
ID: 28239 | Rating: 0 | rate: / Reply Quote | |
Your HD7970 will do fine at POEM, and Einstein, Albert or Folding would be good backup projects. I haven't really looked at Folding for a long time. The last time I was there I think a high end GPU could only match the performance of a high end CPU, which seemed a bit of a waste to me. | |
ID: 28244 | Rating: 0 | rate: / Reply Quote | |
> I wouldn't crunch MW or SETI if they were the last projects standing in | |
ID: 28254 | Rating: 0 | rate: / Reply Quote | |
> Your HD7970 will do fine at POEM, and Einstein, Albert or Folding would be good backup projects. I haven't really looked at Folding for a long time. The last time I was there I think a high end GPU could only match the performance of a high end CPU, which seemed a bit of a waste to me.
Now that both the GPU and CPU clients can handle the same type of work, GPU WUs are using a "unified GPU/SMP benchmarking scheme". http://foldingforum.org/viewtopic.php?f=66&t=22808 A GTX 570 was getting over 150k PPD. Crazy high bump in points. | |
ID: 28255 | Rating: 0 | rate: / Reply Quote | |
> I wouldn't crunch MW or SETI if they were the last projects standing in
It's run by a dirty skank who steals crunchers away from other projects by paying exorbitant (understatement) credits. It's a rogue project. | |
ID: 28256 | Rating: 0 | rate: / Reply Quote | |
Maybe, but I think that the goal of the project is more important than the credits. Furthermore, I would say that MW has a better goal than SETI, because it can be useful; I view SETI as almost throwing work away looking for sentient alien signals. I would crunch for them if they were the "last" project available, but there are more dire issues that need to be solved. | |
ID: 28258 | Rating: 0 | rate: / Reply Quote | |
That's the same kind of flawed logic that says "During WWII we developed electric welding equipment and procedures to speed the joining of metals for the purpose of building ships faster, lighter and stronger, therefore we should have another war so we can develop even better electric welding equipment and procedures." (Lincoln Electric did in fact develop DC welding machines, extruded flux-coated electrodes and low-hydrogen welding rod in response to wartime shipbuilding needs, and is credited in many circles with making D-Day possible. Prior to low-hydrogen electrodes, ships broke apart at sea due to welds failing from hydrogen embrittlement. Lincoln Electric did the research, found the cause of the weld failures and engineered the solution... the 7018 welding electrode, which is now the workhorse of the welding industry.) | |
ID: 28259 | Rating: 0 | rate: / Reply Quote | |
Alas, this thread has decayed into another credits debate. | |
ID: 28260 | Rating: 0 | rate: / Reply Quote | |
> I wouldn't crunch MW or SETI if they were the last projects standing in
Hmm? A high-end ATI card there (who uses NVIDIA there?!) gets only a bit more points than a high-end NVIDIA card gets at GPUGrid (you can't use the ATI here, except at Donate@home). So I don't think they steal crunchers away. And I think it is better to run MW than Prime or Collatz ;) Due to its OpenCL 1.0 support, one of my 4850 cards can crunch only MW as its only science project (except ....... SETI.....). I would throw it away before calculating on Prime or Collatz ;) ____________ DSKAG Austria Research Team: http://www.research.dskag.at | |
ID: 28261 | Rating: 0 | rate: / Reply Quote | |
> Alas, this thread has decayed into another credits debate.
If the solution really were that simple it would have been implemented long ago. The stumbling block is measuring GPU performance. The stumbling block for CPU related credits is the same... measuring CPU performance. They tried it and there were glaring anomalies and inconsistencies that nobody was happy with. And on top of all that, any client-side measure of performance is open to cheating. This project utilizes the GPU more than POEM, Albert, Einstein and several other GPU projects. The power draw is higher, and it's CUDA based, which means it's as complex as it gets. It's sad that less useful projects that don't utilize the GPU well and perform less complex calculations can pay stupid amounts of credits, but what's really sad is that people crunch for them and think they are achieving something. It's the lemming mentality. People are told "this is how you do it... you get the credits, then you put them in a big annoying sig, and then your wretched pointless life suddenly has quantified meaning". To be accepted into the herd they don't question, they just act like all the other water buffalo and chant the mantra "moo, moo, I cannot crunch without my credits, moo, moo". You can certainly learn lessons about GPU crunching and develop code, but finding the millionth digit of Pi isn't finding a cure for cancer, or a drug treatment strategy. I think the banks are getting enough support from us without us spending money finding new primes for them for their high-security transactions. I believe you're referring to the drug money laundering so many of them have been implicated in recently? Or is it their promoting war and conflict so they can rake in huge profits financing those wars that irks you? I kind of favour cancer and AIDS research too, but the counter argument to that is that if people were more careful about what they put in their mouths and where they put their willys there would be far less cancer and AIDS.
I hate sticking my nose into other people's affairs, but they make it my business when they expect my tax dollar to bail them out of trouble. | |
ID: 28262 | Rating: 0 | rate: / Reply Quote | |
> I wouldn't crunch MW or SETI if they were the last projects standing in
It sounds like they finally mended their ways. Thanks for the info. Well, as long as that skank who started their high credit policy is on MW staff I won't crunch there. Anybody who steals crunchers the way she used to isn't very intelligent in my books, so I doubt her research can have any merit. | |
ID: 28263 | Rating: 0 | rate: / Reply Quote | |
Don't want to be argumentative but as someone who's run pretty much all of the major GPU projects (a lot and for a long time), in the earlier days MW credits were (and are) slightly higher than Collatz which was arguably because they were using double precision (which is why NVidia doesn't do so well at MW). MW credits were and are well below those in Donate, DistrRTgen, POEM, Moo and the early days of PrimeGrid (until PG lowered theirs). | |
ID: 28268 | Rating: 0 | rate: / Reply Quote | |
World Community Grid's 'Help Conquer Cancer' project uses AMD & NVIDIA GPUs. It's due to finish in about 5 months. An HD7970, using an app_config file, can get 140,000-150,000 BOINC points/day. A GTX580 will only get ~25,000. | |
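For reference, that per-app tuning is done with an app_config.xml placed in the project's directory under the BOINC data folder. A hedged sketch for running two tasks per GPU; the app name `hcc1` and the usage numbers are from memory and illustration only, so verify the name against your client_state.xml:

```xml
<app_config>
  <app>
    <name>hcc1</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task means two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

The client re-reads app_config.xml on a config re-read or restart; running multiple tasks per GPU only helps when a single task can't keep the card busy.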
ID: 28269 | Rating: 0 | rate: / Reply Quote | |
It is a Bio-science project and after POEM probably the best place to aim your ATI cards. 150K isn't bad (probably better than Einstein and Albert) but you would get far more at POEM; ~1M for an HD7970 (with the right setup)! | |
ID: 28272 | Rating: 0 | rate: / Reply Quote | |
> Don't want to be argumentative
Well, why not? We can argue and still be friends.
> but as someone who's run pretty much all of the major GPU projects (a lot and for a long time), in the earlier days MW credits were (and are) slightly higher than Collatz which was arguably because they were using double precision (which is why NVidia doesn't do so well at MW). MW credits were and are well below those in Donate, DistrRTgen, POEM, Moo and the early days of PrimeGrid (until PG lowered theirs).
The exorbitant credit rates I mentioned were prior to their providing a GPU app. They may well have changed their ways before bringing GPUs on stream, I'm not sure. I quit the project in disgust and have never taken a second look at them. OK, I admit I hold a grudge perhaps longer than I ought to, but I recall the blatant lies and deceit they used to justify what they were doing and I despise them for it.
> Edit: I see credits in the same light as playing a game. Only the object is to collect as many credits as possible. Adds a flavor of competition and helps many science projects. The difference as opposed to other games is that you're helping to promote science. A neat idea if you ask me...
It's a good analogy, except in a game of Monopoly, for example, there are no consequences outside the game. In crunching there are real life consequences outside the credit game, and some of those are negative for people who don't deserve to be abused. For example, the pursuit of credits recently at Oproject@home led one volunteer (perhaps more, not sure) to cheat in a way that injected thousands of bogus results into the project's result database, jeopardising the research and possibly making the entire batch of results worthless, depending on whether the admin can filter out the bogus results. I have many hundreds of hours worth of results in that database and I am not pleased that one credit whore twit has risked all that.
Indeed one can argue that project admins/devs should take measures against such horrible actions, but not every project has the time and manpower to do that. One of BOINC's reasons for existing is to assist small projects just like Oproject, not put them in harm's way. Another negative consequence is that projects do what MW and others do and steal volunteers away by doling out high credits. There will always be competition between researchers for funding, but I will not allow that to creep into the BOINC world if I can do anything about it. I want BOINC to be a safe haven for projects where they don't have to compete that way, where they are not at risk, where they can devote their limited resources to their research. You see, I am all in favour of people having fun and doing what they want as long as it doesn't hurt anybody else. To me it seems like credits have a lot of negative consequences and I just don't think the benefits derived from credits outweigh the bad aspects. I do not accept the hypothesis that tons of crunchers will stop crunching if credits are abolished, and I intend to prove that hypothesis is wrong by conducting a poll to gauge the community's thoughts on the matter. Once that information is known we will proceed accordingly to stop the crap that is going on as a result of this totally broken institution called credits. I wish it would work, I genuinely do, but I think the evidence suggests it cannot work, not ever, too much entropy in the system. Again, if there were no bad consequences I would say let it be and let the credit whores have their fun, but it's hurting people who don't deserve to be hurt. We're feeding all of this, slowly, to the man himself... DA... and so far it seems like he is ready to maybe, said maybe, make a move on this, though there is much to negotiate and common ground still to be found.
He does not like what's going on either, but feels somewhat powerless to stop it, maybe even feels it's not his place to stop it. You know, he and Rom bust their butts just to provide the software, a monumental task in itself, and I think they don't have time to regulate its use. Should it be regulated? More important, can it be regulated? Depends who is willing to take the matter in hand and take the power, hmmm? Like a good game of Euchre: take the power if you think you can. | |
ID: 28273 | Rating: 0 | rate: / Reply Quote | |
Hi All, I don't understand this. What is an "NVIDIA application"? Do you mean CUDA? I thought NVIDIA and AMD both deprecated their CUDA and CAL, in favor of OpenCL. Furthermore, even if CUDA is not yet dead, it surely will be eventually. Time to stop optimizing a dead end, and make OpenCL work. Or there will be no cards that can crunch GPUGRID in the end. | |
ID: 28315 | Rating: 0 | rate: / Reply Quote | |
Nope. AMD left CAL in favor of OpenCL, whereas nVidia does OpenCL because they have to. They prefer CUDA and put as much weight behind it as they can. Currently it's much better than OpenCL and it's got more potential regarding optimization for nVidia's own hardware. These are substantial advantages over OpenCL, which itself obviously has the advantage of "running everywhere". But how well does it run compared to optimized software? | |
ID: 28323 | Rating: 0 | rate: / Reply Quote | |