Message boards : Graphics cards (GPUs) : AMD GPU Status for 2013?

Evil Penguin
Joined: 15 Jan 10
Posts: 42
Credit: 18,255,462
RAC: 0
Message 28093 - Posted: 20 Jan 2013 | 22:06:48 UTC

Hi guys/girls,
It's been a long while since we've had any news regarding AMD GPU crunching. Surely since the HD 7000 series release a year ago there have been improvements to AMD's OpenCL runtime?

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28117 - Posted: 22 Jan 2013 | 14:50:31 UTC - in response to Message 28093.

> Hi guys/girls,
> It's been a long while since we've had any news regarding AMD GPU crunching. Surely since the HD 7000 series release a year ago there have been improvements to AMD's OpenCL runtime?

Between drivers 12.10 and 13.1 there have been 2 big speed increases in AMD's OpenCL. The first bump was about 20% and the second was about 10%. Using anything later than 12.11 beta 8 now makes the equivalent (to NVIDIA) AMD cards the fastest at POEM.
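Two successive driver bumps like these compound multiplicatively rather than adding up. A quick back-of-envelope check (the percentages are taken from the post above; the exact figures are approximate):

```python
# Two successive driver speedups compound multiplicatively, not additively.
bump1 = 0.20   # first OpenCL speed bump, ~20% (per the post)
bump2 = 0.10   # second bump, ~10%

combined = (1 + bump1) * (1 + bump2)  # throughput relative to driver 12.10
print(f"combined speedup: {combined:.2f}x")  # about 1.32x, i.e. ~32% faster
```

So the two bumps together are worth roughly a third more throughput, a bit more than the 30% you'd get by simply adding the percentages.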

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28141 - Posted: 23 Jan 2013 | 1:26:19 UTC - in response to Message 28117.

> Using anything later than 12.11 beta 8 now makes the equivalent (to NVIDIA) AMD cards the fastest at POEM.


I've been an "nVidia fanboi" for a long time, but price and income realities are forcing me to look again at AMD. Just curious, how are you assessing equivalency in the above statement: equivalent price? equivalent numbers of "cores" or whatever the proper term is? (Of all the sub-topics in BOINC, GPU is the one I know least about, trying to catch up tho').

Dylan
Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Message 28142 - Posted: 23 Jan 2013 | 1:51:34 UTC

I can't speak for Beyond, but I think he determines equivalency by looking at the top single-GPU AMD and Nvidia cards, which are the 7970 and the 680, and saying those are "equivalent"; then the 670 and the 7950, and so on. Before AMD dropped their prices, I'm pretty sure the 7970 and 680 were very close in price and close in performance, so they could be considered equivalent. However, I've heard AMD GPUs can overclock more and they are cheaper, so some could argue that GPUs like the 7970 and 680 aren't equivalent.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28166 - Posted: 23 Jan 2013 | 20:44:04 UTC

Yeah, I'd base it roughly on price as well. If one consumes significantly more power this might also need to be factored in (for crunching), but in the current generation both are rather efficient.

MrS
____________
Scanning for our furry friends since Jan 2002

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28173 - Posted: 24 Jan 2013 | 1:19:13 UTC - in response to Message 28141.

> Using anything later than 12.11 beta 8 now makes the equivalent (to NVIDIA) AMD cards the fastest at POEM.
>
> I've been an "nVidia fanboi" for a long time, but price and income realities are forcing me to look again at AMD. Just curious, how are you assessing equivalency in the above statement: equivalent price? equivalent numbers of "cores" or whatever the proper term is? (Of all the sub-topics in BOINC, GPU is the one I know least about, trying to catch up tho').

Pretty much what Dylan and ETA said. Also take a look at the top computers at POEM, dominated by AMD GPUs. There's quite a number of other projects where that's also true.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28174 - Posted: 24 Jan 2013 | 2:14:00 UTC - in response to Message 28173.

Thanks for the info, all of you.

Mmmmm, I see... the top 60 POEM hosts are all AMD and nearly all Windows; I see only 1 using AMD on Linux. IIRC, the AMD drivers for Linux were not that great a while back. It has to work on Linux; it doesn't have to work perfectly, but decently. Has that situation improved, or was it never as bad as I thought it was? Troublesome installs are troublesome but never a show stopper for me; if it's good it's worth fighting for.

Got the dough ready for a GTX 690... real dough not plastic. Looks like I can get 2 AMD 7970 for roughly the same money. Is that the top of the line AMD... 7970?

Dylan
Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Message 28176 - Posted: 24 Jan 2013 | 2:19:24 UTC
Last modified: 24 Jan 2013 | 2:21:18 UTC

There's an official factory-overclocked 7970 called the 7970 GHz Edition also offered by AMD, and a 7990 card made by board partners that is like the 690, with two GPUs (a pair of 7970s) on one board.

Check for yourself:

http://www.amd.com/us/products/desktop/graphics/Pages/desktop-graphics.aspx

P.S. The link doesn't open a new tab.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28179 - Posted: 24 Jan 2013 | 8:32:31 UTC

IMO the dual GPU cards (both NVIDIA & AMD) have had a troublesome history. I'd recommend 2 7970s over a dual GPU model. The 7970 has also come down in price a lot.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28183 - Posted: 24 Jan 2013 | 9:49:34 UTC

What sort of troubles?

I have a 7990 and 1KW PSU in my shopping cart at Newegg and would have attempted a checkout but I know I am $5 shy in my bank account. Tomorrow I'll deposit cash to make up the difference. Maybe that was a fortuitous delay.

I've been planning a purchase for months and have been torn between various high end CPU combinations along with GPU. Recently I decided my strategy would be cheap low end CPUs with top o' the line GPUs since GPUs have such incredible compute power. The cheaper mobos, for example the one I already own and plan to put a GPU into, have but 1 PCI-E x16 slot so a dual GPU card is the only way to put 2 GPUs onto such a mobo.

The next mobo I buy will likely then have 2 x16 slots.

If I could put a GPU on a Rasp-Pi I would do it, assuming there was the required app and other pre-requisites. Just enough CPU to drive a whopping GPU... that's the goal. I think there might be a niche market there, small but somewhat profitable, for anyone who could engineer a $100 "GPU driver" that would have 0 frills and just enough to drive one or two GPUs. I'm talking NFS storage, PXE boot and not much more. The target market would be BOINCers, the Folding@home crowd and the distributed.net crowd. There may be others. Send me schematics, I might be able to prototype it in my PCB shop.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28193 - Posted: 24 Jan 2013 | 15:45:34 UTC

> What sort of troubles?

VRMs overheating and driver problems that don't plague single GPUs. Maybe they've ironed things out in the new generation, but I doubt it. These problems are true for both AMD & NVIDIA.

For $800 AR you can get 2 of the 7970 1000MHz XFX cards with the double lifetime warranty:

http://www.newegg.com/Product/Product.aspx?Item=N82E16814150586

The 7990 will cost you $100 more and only runs at 900/925MHz with very little OC headroom (the 7970 should OC far better). Just my opinion...

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28194 - Posted: 24 Jan 2013 | 16:02:49 UTC
Last modified: 24 Jan 2013 | 16:14:53 UTC

Maybe a little bit about brands, from just one person's experience. I've been running an average of 16+ GPUs for the last few years.

I've had 4 ASUS cards: of those, 1 had a fan failure and 2 5850 cards had complete failures. They were sent in for RMA 2 months ago and I still haven't received replacements. The 4th ASUS I sold. Had a couple of EVGA cards, they're still OK. One Diamond, OK. One Sapphire, OK. Three MSI, all OK. Three Galaxy, all OK. One HIS 5850: the fan failed and they refused to send a replacement fan, so it was sent to them for RMA at the same time as the ASUS cards; still haven't gotten that one back.

I've had more XFX cards than all the other brands combined. I bought so many XFX strictly due to the double lifetime warranty (check the model, not all have the super warranty). Of those I've had just 2 fan failures, both on 5850 cards. Both times I contacted XFX support and they had complete new heatsink/fan assemblies at my door in 2-3 days, no cost or hassle to me. Also, my neighbor's running 4 XFX cards, no problems there either.

Regards/Beyond

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28195 - Posted: 24 Jan 2013 | 16:48:21 UTC - in response to Message 28193.

Thanks for your opinion and the info about the types of problems. I have my GTX 570 running at 40C to 49C because it's in a cabinet I built that is air conditioned with cold winter air. In addition to that the GPU vents into a duct that carries the exhaust directly to either the outdoors in summer or into the furnace's cold air return duct in winter to heat the rest of the house rather than the cabinet and computer room. If I get a 7990 it will go into the same cabinet. I know I can sustain that operating temperature all summer, no problem, by patching in a small air conditioning unit that will supply the cold air mother nature so kindly provides now. It's a small AC unit and it needs to cool a very small volume so it will run very cheap. In my mind that solves the VRMs overheating problem.

The driver issues are something I cannot control, so I'm going to avoid putting myself at the mercy of AMD or nVidia. I could end up waiting until the card becomes obsolete before the driver issues get fixed, and I don't need that crap.

For now I'll buy just 1 X 7970 and put it in my host that doesn't have a GPU. I won't even need a new PSU to do that. Soon enough I'll be able to get a mobo with 2 PCI-E 3.0 x16 slots and put 2 GPUs in that.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28196 - Posted: 24 Jan 2013 | 18:12:28 UTC - in response to Message 28195.

> Thanks for your opinion and the info about the types of problems. I have my GTX 570 running at 40C to 49C because it's in a cabinet I built that is air conditioned with cold winter air. In addition to that the GPU vents into a duct that carries the exhaust directly to either the outdoors in summer or into the furnace's cold air return duct in winter to heat the rest of the house rather than the cabinet and computer room. If I get a 7990 it will go into the same cabinet. I know I can sustain that operating temperature all summer, no problem, by patching in a small air conditioning unit that will supply the cold air mother nature so kindly provides now. It's a small AC unit and it needs to cool a very small volume so it will run very cheap. In my mind that solves the VRMs overheating problem.

Nice system. Mine is not so high tech, but it works. In the winter the computers heat the whole (largish) house. I have them spread around strategically and the forced-air fan distributes heat well. My furnace is a combination wood and off-peak electric unit, and last winter it was not used at all except for air distribution (and this is Minnesota). The electric company called and apparently didn't believe my explanation, so they sent out a repairman because they thought my off-peak electric meter must be broken. It wasn't. This winter (because of a couple cold snaps) I have burned wood for parts of 3 days, no electric. In the summer I have a whole house fan that circulates a lot of air. Only used air conditioning two days last summer, and that was because I felt sorry for Cocoa (see avatar).

Regards/Beyond

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28197 - Posted: 24 Jan 2013 | 18:32:09 UTC

While I'm in a talkative mode: most of my PCs are running 2 GPUs per machine. For a year or two I ran 2 boxes with 3 GPUs each. You know what, it was a PITA: limitations in BOINC, more driver issues, higher temperatures. Not worth it for me. The 2-GPU boxes easily handle ANY summer heat and are quieter due to lower fan speeds. The best way IMO to run the 2-GPU boxes is with one ATI/AMD and one NVIDIA. That way there's no need for BOINC exclusions, which are problematic due to BOINC bugs. With 1 of each it's easy to set a different project per GPU, no fuss, no muss. POEM is inefficient when trying to run it on more than one GPU per machine, but for instance running the ATI on POEM and the NVIDIA on GPUGrid works perfectly. Over and out...
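For anyone who does end up with two same-vendor cards and needs the exclusions mentioned above, the BOINC client reads them from cc_config.xml via the <exclude_gpu> option, which pins a project to a GPU type. A minimal sketch, assuming a BOINC 7.x client; the project URLs are examples of the two projects discussed here:

```xml
<cc_config>
  <options>
    <!-- Keep GPUGrid off the ATI card... -->
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <type>ATI</type>
    </exclude_gpu>
    <!-- ...and keep POEM off the NVIDIA card. -->
    <exclude_gpu>
      <url>http://boinc.fzk.de/poem/</url>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>
```

With one card of each vendor, as suggested above, none of this is needed: each project simply runs on the only GPU it has an app for.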

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 28201 - Posted: 24 Jan 2013 | 21:31:57 UTC

Nice discussion and a pretty nice solution @ Beyond!

Regarding buying many more AMDs: I'd caution against that. While POEM runs great on them, POEM has been short on work before and may not recover quickly. If you run out of POEM WUs there are not many attractive alternatives left. Mine runs Milkyway as a backup; not sure about Seti and Einstein. But all the "classical" projects AMD cards have been able to run for some time (Collatz, PG, Moo) I consider.. well, not very useful.

A single HD7970 won't hurt, though. Have fun with it.. it can either be a beast (~1.17 V, 1.2+ GHz) or run rather efficiently at 0.9 - 1.0 GHz with significantly less voltage :)

MrS
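The two operating points MrS quotes line up with the usual first-order rule that a chip's dynamic power scales roughly with frequency times voltage squared. A rough sketch under that assumption; the 1.0 GHz @ 1.0 V baseline and the undervolted point's exact numbers are illustrative, not measured:

```python
# First-order dynamic power scaling for a GPU: P is roughly proportional
# to f * V^2. Real cards also have static leakage and other terms, so
# treat this as a rough illustration, not a measurement.

def relative_power(freq_ghz: float, volts: float,
                   base_freq: float = 1.0, base_volts: float = 1.0) -> float:
    """Dynamic power relative to an assumed 1.0 GHz @ 1.0 V baseline."""
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

beast = relative_power(1.2, 1.17)       # the ~1.17 V, 1.2+ GHz "beast" point
efficient = relative_power(0.95, 0.95)  # an undervolted ~0.9-1.0 GHz point

print(f"beast:     {beast:.2f}x baseline power for 1.20x throughput")
print(f"efficient: {efficient:.2f}x baseline power for 0.95x throughput")
```

Under these assumed numbers the "beast" point buys about 26% more throughput than the undervolted point for roughly twice the dynamic power, which is why the lower-voltage setting is the efficient one.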

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28203 - Posted: 24 Jan 2013 | 21:51:30 UTC

My "cooler cabinet" may sound high tech but it's not. I built 90% of it from scrap that cost me nothing; the remainder was about $20 worth of new clothes dryer vent duct. Even the paint I'm going to paint it with was scrounged for free. As soon as I paint it up pretty I'm going to disassemble it and make a video or series of photos while reassembling to show how I did it.

The basic principle is: don't let the heat mix into the room and heat it up and then expect to use that hot air to cool stuff off; contain the heat the instant it comes out of the machine and deal with it sensibly. The alternative is monster, high dollar heat sink/fan combos or liquid cooling. All my gear works with stock fans and heat sinks. Like I keep telling mitrichr, brains (thinking), not money, is the answer. I'm not a brainiac, I just think a lot :-) Cuz I have no money!!

Your method of spreading the heat around the house is a good idea too. Very cute dog, btw :-)

Thanks for the tip on 1 AMD plus 1 nVidia per box. From various discussions I've lurked I kind of gathered that was a good way but never asked because never needed to. I'm going to plan my future purchases and expansions on that principle.

Over and out for me too, sorry for the thread hijack guys.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28204 - Posted: 24 Jan 2013 | 22:57:00 UTC - in response to Message 28201.

> Nice discussion and a pretty nice solution @ Beyond!
>
> Regarding buying many more AMDs: I'd caution against that. While POEM runs great on them, POEM has been short on work before and may not recover quickly. If you run out of POEM WUs there are not many attractive alternatives left. Mine runs Milkyway as a backup; not sure about Seti and Einstein. But all the "classical" projects AMD cards have been able to run for some time (Collatz, PG, Moo) I consider.. well, not very useful.
>
> A single HD7970 won't hurt, though. Have fun with it.. it can either be a beast (~1.17 V, 1.2+ GHz) or run rather efficiently at 0.9 - 1.0 GHz with significantly less voltage :)
>
> MrS

Thanks! As far as projects go (personally I like MW, Collatz, PG, and Moo is OK too) (and don't forget Donate, the GPUGrid sister project), the tide is moving toward OpenCL. CAL is dead and both AMD and NVIDIA seem to be allocating more of their resources to the common language. OpenCL has recently made large performance strides on both platforms and is getting better with every release. It's just a matter of time IMO.

Regards/Beyond

MJH
Project administrator
Project developer
Project scientist
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 28218 - Posted: 26 Jan 2013 | 18:13:25 UTC - in response to Message 28183.

Hi All,

We have no plans for an AMD app right now. As you may recall, we did have an OpenCL build ready some years ago, but the AMD drivers were never stable enough, and performance was too low, for us to want to deploy it. Since then, we've worked further on the Nvidia application to improve its features and performance, and it would now be a substantial effort to rejuvenate the AMD OpenCL code. Every time there's movement from the AMD camp I do check it out to see whether things have improved, but realistically, it's unlikely that we'll make any change. Sorry to disappoint.

Plug: If you have an AMD GPU and really want to contribute to GPUGRID activities, we have our sister project: http://donateathome.org/

MJH
Project administrator
Project developer
Project scientist
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Message 28219 - Posted: 26 Jan 2013 | 18:19:15 UTC - in response to Message 28183.

Dagorath:

> If I could put a GPU on a Rasp-Pi I would do it, assuming there was the required app and other pre-requisites. Just enough CPU to drive a whopping GPU... that's the goal.


We build our own systems for our lab that are heavy on GPU and light on CPU. We also designed a custom case to deal with the cooling problems. I'll post more details if anyone's interested.

The cost of a single socket host system isn't too painful when amortised over 4 GPUs, and it's a lot less painful than making custom motherboards..

MJH

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28222 - Posted: 26 Jan 2013 | 19:32:10 UTC - in response to Message 28219.
Last modified: 26 Jan 2013 | 19:35:16 UTC

@MJH

Thanks, I'm very interested in hearing the details. 4 GPUs per motherboard is my goal.

I'll definitely take a look at Donate@home when my 7970 arrives.

icg studio
Joined: 24 Nov 11
Posts: 3
Credit: 954,677
RAC: 0
Message 28228 - Posted: 27 Jan 2013 | 0:13:26 UTC - in response to Message 28219.

I'd be more than happy to see photos of this lab hardware.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28231 - Posted: 27 Jan 2013 | 1:19:18 UTC

I'd love to see some pictures too. MJH, you do realize that your goals are different from ours (most of us). While you design for one project, our systems are designed to run many projects with many different needs. We also run CPU projects at the same time, generally on all cores. Projects like POEM need massive CPU support; other GPU projects need almost none. Most scientific projects are CPU-only, so we need to be able to support those too. Most of my GPUs are ATI, so I ran up 500,000,000 credits on Donate; hopefully it helped a little to fund a budding researcher. But there are so many good projects to support...

Regards/Beyond

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28236 - Posted: 27 Jan 2013 | 10:57:26 UTC - in response to Message 28231.

Off-peak electric is a scam. Most people revert to their old habits, and the electric companies know that; that's why they turned up, and left in shock. I think it's only GPU crunchers who can actually benefit.

Running two GPUs of the same type can be a real pain. Here it's usually not an issue, but POEM can't even use two same-type cards, which means complicated settings, outages and hands-on project juggling. I concur with MrS's analysis; POEM outages are common and running optimally isn't the norm. With no other real bio-medical ATI research projects, that makes getting an ATI less attractive. Einstein and Albert don't give much credit and their ATI app is slower than their CUDA app. MW often has outages. Ditto for SETI, and Donate has more issues than GPUGrid. You can get very high credit from a GeForce 600 series GPU at POEM and reasonable credit here. So I think if someone wanted a GPU, an NVidia has a good shout.

It used to be the case, with previous apps, that high PCIe rates were important to GPUGrid. That situation unexpectedly changed with the 4.2 app, and PCIe2-based systems are fine. It's also the case that one CPU core is sufficient to feed the GPUGrid apps. This means that older systems are fine here. At POEM the opposite is true: you need massive PCIe bandwidth and as many CPU cores/threads as possible. Even an overclocked i7-3770K with high-end memory & SSD struggles to feed a single GTX660Ti with no other projects running.

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Evil Penguin
Joined: 15 Jan 10
Posts: 42
Credit: 18,255,462
RAC: 0
Message 28237 - Posted: 27 Jan 2013 | 11:10:36 UTC

On the bright side...
The Folding@home guys at Stanford have been making progress for AMD cards.
They have a new lead GPU developer.

Latest FAHBench results show that AMD cards are quite capable at processing explicit and implicit WUs (older gen cards and fahcores could only work on implicit WUs).

I just want to be able to contribute my GPUs to projects that focus on disease research and not so heavy on the CPU usage.

Sadly at the moment POEM@home, GPUGRID and even F@h aren't options for me. :(

dskagcommunity
Joined: 28 Apr 11
Posts: 456
Credit: 817,790,789
RAC: 0
Message 28238 - Posted: 27 Jan 2013 | 11:12:09 UTC - in response to Message 28231.
Last modified: 27 Jan 2013 | 11:15:24 UTC

> Projects like POEM need massive CPU support.



POEM first and foremost needs a lot of RAM bandwidth ;) I tried underclocking and overclocking a dual-core CPU with no increase or decrease in GPU load. But when I changed the RAM clock by only a few MHz, GPU load decreased or increased massively.

@Evil Penguin: I wish Folding@home would go back to BOINC :/

@skgiven: huh? MW hasn't really had interruptions here for months.. not since they got the latest GPU problem sorted out.
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Crunching for my deceased Dog who had "good" Braincancer..

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28239 - Posted: 27 Jan 2013 | 12:09:11 UTC

Hmmm, maybe AMD wasn't such a sound purchase after all. Seems like there are some options for it, however. I couldn't give 2 hoots about the credits, so Albert or Einstein will do fine; maybe Folding@home is where it will go. I wouldn't crunch MW or SETI if they were the last projects standing; in fact, the sooner they blow up and never return the better. I'll put my 7970 to good use somewhere when it arrives. Optimal crunching? Well, it's a worthy goal, but life is too short to lose sleep over it. I guess that's easier to say when one gets electricity as cheap as I do.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28244 - Posted: 27 Jan 2013 | 13:28:14 UTC - in response to Message 28239.

Your HD7970 will do fine at POEM, and Einstein, Albert or Folding would be good backup projects. I haven't really looked at Folding for a long time. The last time I was there I think a high end GPU could only match the performance of a high end CPU, which seemed a bit of a waste to me.

I wasn't aware that MW was running so smoothly, but since POEM and Donate came along it's been largely relegated to a backup project. Not Sky!

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 28254 - Posted: 27 Jan 2013 | 22:18:07 UTC

> I wouldn't crunch MW or SETI if they were the last projects standing in
> fact the sooner they blow up and never return the better.

What don't you like about MW?

Evil Penguin
Joined: 15 Jan 10
Posts: 42
Credit: 18,255,462
RAC: 0
Message 28255 - Posted: 27 Jan 2013 | 22:36:29 UTC - in response to Message 28244.
Last modified: 27 Jan 2013 | 22:37:43 UTC

> Your HD7970 will do fine at POEM, and Einstein, Albert or Folding would be good backup projects. I haven't really looked at Folding for a long time. The last time I was there I think a high end GPU could only match the performance of a high end CPU, which seemed a bit of a waste to me.
>
> I wasn't aware that MW was running so smoothly, but since POEM and Donate came along it's been largely relegated to a backup project. Not Sky!

Now that both the GPU and CPU clients can handle the same type of work, GPU WUs are using a "unified GPU/SMP benchmarking scheme".

http://foldingforum.org/viewtopic.php?f=66&t=22808

A GTX 570 was getting over 150k PPD.
Crazy high bump in points.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28256 - Posted: 27 Jan 2013 | 23:44:29 UTC - in response to Message 28254.

> I wouldn't crunch MW or SETI if they were the last projects standing in
> fact the sooner they blow up and never return the better.

> What don't you like about MW?


It's run by a dirty skank who steals crunchers away from other projects by paying exorbitant (understatement) credits. It's a rogue project.

Dylan
Joined: 16 Jul 12
Posts: 98
Credit: 386,043,752
RAC: 0
Message 28258 - Posted: 28 Jan 2013 | 0:56:29 UTC

Maybe, but I think that the goal of a project is more important than the credits. Furthermore, I would say that MW has a better goal than SETI because its results can be useful, whereas I view SETI as almost throwing work away looking for sentient alien signals. I would crunch for them if they were the last project available, but there are more dire issues that need to be solved.

In conclusion, I think computers can focus on better projects than SETI and MW; however, it is not my decision to make. Also, SETI helped develop distributed computing to where it is now; without SETI, distributed computing would probably be a lot less popular, or less efficient in terms of the applications used. I hope what I said makes sense, I just wanted to share my views.

Dagorath
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Message 28259 - Posted: 28 Jan 2013 | 2:18:16 UTC - in response to Message 28258.
Last modified: 28 Jan 2013 | 2:24:20 UTC

That's the same kind of flawed logic that says "During WWII we developed electric welding equipment and procedures to speed the joining of metals for the purpose of building ships faster, lighter and stronger, therefore we should have another war so we can develop even better electric welding equipment and procedures." (Lincoln Electric did in fact develop DC welding machines, extruded flux-coated electrodes and low-hydrogen welding rod in response to wartime shipbuilding needs, and is credited in many circles with making D-Day possible. Prior to low-hydrogen electrodes, ships broke apart at sea due to welds failing from hydrogen embrittlement. Lincoln Electric did the research, found the cause of the weld failures and engineered the solution: the 7018 welding electrode, which is now the workhorse of the welding industry.)

All of the BOINC development work could have (and should have) been done at a project that has a reasonable chance of success. SETI's chances of finding ET are so minute we may as well say they are zero. That is fact, not romantic fiction. If there were no other project in need of donated CPU time then I would say it doesn't matter if CPU time is wasted on SETI, but that is not the case. I know of 30 projects that stand a much higher chance of success and whose need for donated time is real. If I find time I want to form a group that will lobby the NSF and whoever else funds SETI to stop wasting money on it, so that the CPU time wasted there can be diverted to worthy objectives. Oh, I am pretty sure aliens exist; I just know we're never going to find them with SETI's method.

As for MW's goal... their goal is to steal crunchers. A pox on them as well as the credit system that makes it possible for rogue projects like MW to work their evil.

Abolish credits, abolish MW, abolish SETI; those are my goals. No credits no crunchee? That's BS and I'm gonna prove it.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 28260 - Posted: 28 Jan 2013 | 10:36:50 UTC - in response to Message 28259.
Last modified: 28 Jan 2013 | 10:41:58 UTC

Alas, this thread has decayed into another credits debate.
The solution is still GPU performance × GPU usage = credits, and the solution to the scheduler is still to have a separate GPU scheduler. Both were suggested years ago...
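The proposed formula is simple enough to sketch. Everything here is hypothetical (the function name, inputs and scaling constant are mine), just to show the shape of the idea: credit should scale with how fast the GPU is and how fully the application actually uses it.

```python
# Hypothetical sketch of the "GPU performance x GPU usage = credits" idea.
# Names, inputs and the scale constant are illustrative, not a real scheme.

def credits(gpu_peak_gflops: float, gpu_usage: float, hours: float,
            scale: float = 0.01) -> float:
    """gpu_usage is a 0..1 utilisation fraction; scale is arbitrary."""
    return gpu_peak_gflops * gpu_usage * hours * scale

# A card twice as fast, fully utilised, earns twice the credit...
assert credits(2000, 1.0, 1.0) == 2 * credits(1000, 1.0, 1.0)
# ...and a project that only half-loads the GPU pays half as much.
assert credits(1000, 0.5, 1.0) == 0.5 * credits(1000, 1.0, 1.0)
```

The point of such a scheme is that a project which barely loads the GPU could no longer pay out more than one that works it hard.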

This project utilizes the GPU more than POEM, Albert, Einstein and several other GPU projects. The power draw is higher, and it's CUDA based, which means it's as complex as it gets. It's sad that less useful projects that don't utilize the GPU well and perform less complex calculations can pay stupid amounts of credits, but what's really sad is that people crunch for them and think they are achieving something.
You can certainly learn lessons about GPU crunching and develop code, but finding the millionth digit of Pi isn't finding a cure for cancer or a drug treatment strategy. I think the banks are getting enough support from us without us spending money finding new primes for their high-security transactions, while they leave the back doors open and sell our info on to the latest scam artist.

While it's everyone's own choice what they crunch for, and some people do have a genuine interest in astronomy, watching others brag about getting massive credits for stupid research puts many off crunching altogether.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,790,789
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28261 - Posted: 28 Jan 2013 | 11:48:27 UTC - in response to Message 28256.
Last modified: 28 Jan 2013 | 11:50:15 UTC

> I wouldn't crunch MW or SETI if they were the last projects standing in
> fact the sooner they blow up and never return the better.

What don't you like about MW?


It's run by a dirty skank who steals crunchers away from other projects by paying exorbitant (understatement) credits. It's a rogue project.



Hmm? A high-end ATI card there (who uses NVIDIA there?!) gets only a bit more points than a high-end NVIDIA card gets at GPUGRID (you can't use the ATI here, except at Donate@home). So I don't think they steal crunchers away. And I think it is better to run MW than Prime or Collatz ;) Since it's only OpenCL 1.0, one of my 4850 cards can crunch only MW as its only science project (except ....... SETI.....). I would throw it away before calculating on Prime or Collatz ;)
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Crunching for my deceased Dog who had "good" Braincancer..

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28262 - Posted: 28 Jan 2013 | 12:58:39 UTC - in response to Message 28260.

Alas, this thread has decayed into another credits debate.
The solution is still GPU performance × GPU usage = credits, and the solution to the scheduler is still to have a separate GPU scheduler. Both were suggested years ago...


If the solution really were that simple it would have been implemented long ago. The stumbling block is measuring GPU performance. The stumbling block for CPU-related credits is the same: measuring CPU performance. They tried it, and there were glaring anomalies and inconsistencies that nobody was happy with. And on top of all that, any client-side measure of performance is open to cheating.

This project utilizes the GPU more than POEM, Albert, Einstein and several other GPU projects. The power draw is higher, and it's CUDA based, which means it's as complex as it gets. It's sad that less useful projects that don't utilize the GPU well and perform less complex calculations can pay stupid amounts of credits, but what's really sad is that people crunch for them and think they are achieving something.


It's the lemming mentality. People are told "this is how you do it... you get the credits, then you put them in a big annoying sig, and then your wretched pointless life suddenly has quantified meaning". To be accepted into the herd they don't question; they just act like all the other water buffalo and chant the mantra "moo, moo, I cannot crunch without my credits, moo, moo".

You can certainly learn lessons about GPU crunching and develop code, but finding the millionth digit of Pi isn't finding a cure for cancer, or a drug treatment strategy. I think the banks are getting enough support from us without us spending money finding new primes for their high-security transactions,


I believe you're referring to the drug money laundering so many of them have been implicated in recently? Or is it their promoting war and conflict so they can rake in huge profits financing those wars that irks you?

I kind of favour cancer and AIDS research too, but the counter-argument is that if people were more careful about what they put in their mouths and where they put their willies there would be far less cancer and AIDS. I hate sticking my nose into other people's affairs, but they make it my business when they expect my tax dollar to bail them out of trouble.

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28263 - Posted: 28 Jan 2013 | 13:03:12 UTC - in response to Message 28261.
Last modified: 28 Jan 2013 | 13:07:03 UTC

> I wouldn't crunch MW or SETI if they were the last projects standing in
> fact the sooner they blow up and never return the better.

What don't you like about MW?


It's run by a dirty skank who steals crunchers away from other projects by paying exorbitant (understatement) credits. It's a rogue project.



Hmm? A high-end ATI card there (who uses NVIDIA there?!) gets only a bit more points than a high-end NVIDIA card gets at GPUGRID (you can't use the ATI here, except at Donate@home). So I don't think they steal crunchers away.


It sounds like they finally mended their ways. Thanks for the info. Well, as long as that skank who started their high-credit policy is on the MW staff I won't crunch there. Anybody who steals crunchers the way she used to isn't very intelligent in my book, so I doubt her research can have any merit.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28268 - Posted: 28 Jan 2013 | 20:05:47 UTC
Last modified: 28 Jan 2013 | 20:11:10 UTC

Don't want to be argumentative, but as someone who's run pretty much all of the major GPU projects (a lot, and for a long time): in the earlier days MW credits were (and are) slightly higher than Collatz's, arguably because they were using double precision (which is why NVIDIA doesn't do so well at MW). MW credits were and are well below those at Donate, DistrRTgen, POEM, Moo and the early days of PrimeGrid (until PG lowered theirs).

Edit: I see credits in the same light as playing a game, where the object is to collect as many credits as possible. It adds a flavor of competition and helps many science projects. The difference, as opposed to other games, is that you're helping to promote science. A neat idea if you ask me...

Profile Stoneageman
Avatar
Send message
Joined: 25 May 09
Posts: 215
Credit: 16,735,639,080
RAC: 17
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28269 - Posted: 28 Jan 2013 | 21:43:07 UTC

World Community Grid's Help Conquer Cancer project uses AMD & NVIDIA GPUs. It's due to finish in about 5 months. An HD 7970, using an app_config file, can get 140,000-150,000 BOINC points/day. A GTX 580 will only get ~25,000.
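For anyone unfamiliar with the app_config.xml trick mentioned above: BOINC 7.0.40+ reads an app_config.xml from the project's directory to control how many tasks share a GPU. A minimal sketch follows; note the app's short name here is an assumption on my part, so check your own client_state.xml for the real one.

```xml
<!-- app_config.xml in the World Community Grid project directory.
     The <name> value below is an assumed placeholder; take the app's
     actual short name from client_state.xml before using this. -->
<app_config>
  <app>
    <name>hcc1</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task: two HCC tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve a quarter of a CPU core per GPU task -->
      <cpu_usage>0.25</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Restart the BOINC client (or re-read config files) after creating it for the settings to take effect.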

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28272 - Posted: 28 Jan 2013 | 22:59:36 UTC - in response to Message 28269.
Last modified: 28 Jan 2013 | 23:01:20 UTC

It is a bioscience project and, after POEM, probably the best place to aim your ATI cards. 150K isn't bad (probably better than Einstein and Albert) but you would get far more at POEM; ~1M for an HD 7970 (with the right setup)!
I would take the 5 months of remaining lifespan with a pinch of salt though; that crystallography project has been around as long as some hills.
I agree that it's certainly not for NVIDIA cards though; it actually started off as a CUDA project and was later adapted to OpenCL. I think I worked out that two GT 240s would match a GTX 580 once the CPU time is accounted for. Horses for courses.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28273 - Posted: 29 Jan 2013 | 0:05:44 UTC - in response to Message 28268.

Don't want to be argumentative


Well, why not? We can argue and still be friends.

but as someone who's run pretty much all of the major GPU projects (a lot, and for a long time): in the earlier days MW credits were (and are) slightly higher than Collatz's, arguably because they were using double precision (which is why NVIDIA doesn't do so well at MW). MW credits were and are well below those at Donate, DistrRTgen, POEM, Moo and the early days of PrimeGrid (until PG lowered theirs).


The exorbitant credit rates I mentioned were from before they provided a GPU app. They may well have changed their ways before bringing GPUs on stream; I'm not sure. I quit the project in disgust and have never taken a second look at them. OK, I admit I hold a grudge perhaps longer than I ought to, but I recall the blatant lies and deceit they used to justify what they were doing, and I despise them for it.

Edit: I see credits in the same light as playing a game, where the object is to collect as many credits as possible. It adds a flavor of competition and helps many science projects. The difference, as opposed to other games, is that you're helping to promote science. A neat idea if you ask me...


It's a good analogy, except in a game of Monopoly, for example, there are no consequences outside the game. In crunching there are real-life consequences outside the credit game, and some of those are negative for people who don't deserve to be abused. For example, the pursuit of credits at Oproject@home recently led one volunteer (perhaps more, I'm not sure) to cheat in a way that injected thousands of bogus results into the project's result database, jeopardising the research and possibly making the entire batch of results worthless; it depends on whether the admin can filter out the bogus results. I have many hundreds of hours' worth of results in that database and I am not pleased that one credit-whore twit has risked all that. Indeed, one can argue that project admins/devs should take measures against such horrible actions, but not every project has the time and manpower to do that. One of BOINC's reasons for existing is to assist small projects just like Oproject, not put them in harm's way.

Another negative consequence is that projects steal volunteers away, as MW and others have done, by doling out high credits. There will always be competition between researchers for funding, but I will not allow that to creep into the BOINC world if I can do anything about it. I want BOINC to be a safe haven for projects, where they don't have to compete that way, where they are not at risk, where they can devote their limited resources to their research.

You see, I am all in favour of people having fun and doing what they want as long as it doesn't hurt anybody else. To me it seems like credits have a lot of negative consequences, and I just don't think the benefits derived from credits outweigh the bad aspects. I do not accept the hypothesis that tons of crunchers will stop crunching if credits are abolished, and I intend to prove that hypothesis wrong by conducting a poll to gauge the community's thoughts on the matter. Once that information is known we will proceed accordingly to stop the crap that is going on as a result of this totally broken institution called credits. I wish it would work, I genuinely do, but I think the evidence suggests it cannot work, not ever; too much entropy in the system. Again, if there were no bad consequences I would say let it be and let the credit whores have their fun, but it's hurting people who don't deserve to be hurt.

We're feeding all of this, slowly, to the man himself... DA... and so far it seems like he is ready to maybe, I said maybe, make a move on this, though there is much to negotiate and common ground still to be found. He does not like what's going on either but feels somewhat powerless to stop it; maybe he even feels it's not his place to stop it. You know, he and Rom bust their butts just to provide the software, a monumental task in itself, and I think they don't have time to regulate its use. Should it be regulated? More important, can it be regulated? It depends on who is willing to take the matter in hand and take the power, hmmm? Like in a good game of Euchre: take the power if you think you can.

Profile Mumps [MM]
Send message
Joined: 26 Mar 09
Posts: 1
Credit: 157,833,736
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 28315 - Posted: 31 Jan 2013 | 7:03:06 UTC - in response to Message 28218.

Hi All,

We have no plans for an AMD app right now. As you may recall, we did have an OpenCL build ready some years ago, but the AMD drivers were never stable enough - and performance too low - for us to want to deploy it. Since then, we've worked further on the Nvidia application to improve its features and performance, and it would now be substantial effort to rejuvenate the AMD OpenCL code.


I don't understand this. What is the "Nvidia application"? Do you mean CUDA? I thought both NVIDIA and AMD deprecated their CUDA and CAL in favor of OpenCL. Furthermore, even if CUDA is not yet dead, it surely will be eventually. Time to stop optimizing a dead end and make OpenCL work, or there will be no cards that can crunch GPUGRID in the end.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28323 - Posted: 31 Jan 2013 | 21:07:41 UTC - in response to Message 28315.

Nope. AMD left CAL in favor of OpenCL, whereas NVIDIA does OpenCL because they have to. They prefer CUDA and put as much weight behind it as they can. Currently it's much better than OpenCL, and it's got more potential regarding optimization for NVIDIA's own hardware. These are substantial advantages over OpenCL, which itself obviously has the advantage of "running everywhere". But how well does it run compared to optimized software?

I expect CUDA to stay healthy for at least a few more years. And if GPUGRID suffers from not having access to AMD cards (or whatever else may pop up using OpenCL or even other stuff), the scientists will have more time, since they won't be as busy doing actual science work. At that point they'd be dumb not to extend their app to other platforms.

Well, one could always argue that more speed is always better and may help you get along with sloppy coding and less optimization, speeding up the scientific progress along the way. However, this shouldn't be considered normal for something deployed at the scale of BOINC.

MrS
____________
Scanning for our furry friends since Jan 2002
