
Message boards : Number crunching : Credit per € / $

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 8566 - Posted: 18 Apr 2009 | 14:27:32 UTC
Last modified: 29 Apr 2009 | 20:53:39 UTC

last update: 29th of April 2009

A list compiled by loki that gives you an overview of the supported Nvidia cards and their €/Credit and $/Credit ratings.

The performance figure used for "credit" is GFLOPS, taken from wikipedia.de. € prices are from idealo.de, US$ prices from pricegrabber.com.

Prices: cheapest found, incl. tax and excl. shipping. Check your local prices as they vary a lot, often from day to day.
Prices in [] are from [ebay]: Buy It Now, incl. shipping, and only included if at least 10€/$ cheaper than the shops.

Exchange rate 16 Apr: EURUSD=X 1.3171 - 1€ = 1.32$; USDEUR=X 0.7593 - 1$ = 0.76€

Power consumption and minimum system power requirement from nvidia.com

---------------------------------------------------------------------------------------------------------------------------------------
G92, G92b, G94, G94b

Model                       GFLOPS    €           GFLOPS/€     $           GFLOPS/$     Load     req. Power Supply
Geforce 8800 GT             504       62€         8.13         $99[80]     5.10[6.30]   105 W    400 W
Geforce 8800 GTS (512)      624       117€        5.33         $159[90]    3.92[6.93]   140 W    450 W

Geforce 9600 GSO 512        234       70€         3.34         $80         2.93         90 W     400 W
Geforce 9600 GT             312       71€         4.39         $75         4.16         59 W     300 W
Geforce 9600 GSO            396       80€         4.95         $80[70]     4.95[5.66]   105 W    400 W
Geforce 9800 GT             508       87€         5.84         $99         5.13         105 W    400 W
Geforce 9800 GTX            648       126€[107]   5.14[6.01]   $135        4.80         140 W    450 W
Geforce 9800 GX2            1052      250€[233]   4.21[4.52]   $250[205]   4.21[5.13]   197 W    580 W

Geforce GTS 250             705       100€        7.05         $137        5.15         150 W    450 W

(note:
- 9800GTX+ is similar to GTS 250
- 8800GS is similar to 9600GSO 384 MB)


GT200, GT200b - optimization bonus 41% **1

Model                       GFLOPS (+41%)    €      GFLOPS/€      $           GFLOPS/$                   Load     req. Power Supply
Geforce GTX 260             715.4 (1009)     141€   5.07 (7.15)   $179        4.00 (5.64)                182 W    500 W
Geforce GTX 260 (216)       805 (1135)       156€   5.16 (7.28)   $189        4.26 (6.00)                190 W    500 W
Geforce GTX 275             1010.9 (1424)    212€   4.77 (6.73)   $250        4.04 (5.70)                219 W    550 W
Geforce GTX 280             933.1 (1316)     289€   3.23 (4.55)   $265        3.52 (4.96)                236 W    550 W
Geforce GTX 285             1062.7 (1498)    287€   3.70 (5.22)   $340[330]   3.13 (4.41) [3.22/4.54]    204 W    550 W
Geforce GTX 295             1788.4 (2522)    406€   4.40 (6.20)   $533[510]   3.36 (4.74) [3.51/4.95]    289 W    680 W


Nvidia Tesla

C1060 Computing Processor   1936 (2730)   1508€   1.28 (1.80)   $1300   1.49 (2.10)   188 W   500 W
S1070 1U Computing Server   4320 (6091)   6682€   0.65 (0.91)   -       -             -       800 W


The 100 series is not available for individual purchase.
---------------------------------------------------------------------------------------------------------------------------------------

**1
Let's put some numbers in here and compare these 2 WUs (1 & 2) with pretty similar names:

1. Me with 9800GTX+: 89917 s, 1944 MHz, 128 shaders -> 746.5 GFlops
2. GTX 260 Core 216: 48517 s, 1512 MHz, 216 shaders -> 979.8 GFlops

-> I need 1.853 times as long with 0.761 times the GFlops. That means for this WU each "GT200 flop" is worth 1.41 "G92 flops"; put another way, GT200 is 41% faster per theoretical GFLOP.
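For anyone who wants to reproduce this, here is a minimal Python sketch of the calculation above; it assumes the usual 3 FLOP per shader per clock for these theoretical ratings.

# Theoretical GFLOPS rating used in this thread: shaders * shader clock (GHz) * 3 FLOP/clock
def gflops(shaders, shader_clock_mhz, flop_per_clock=3):
    return shaders * shader_clock_mhz / 1000.0 * flop_per_clock

g92_time, g92_gflops = 89917, gflops(128, 1944)       # 9800GTX+    -> ~746.5 GFLOPS
gt200_time, gt200_gflops = 48517, gflops(216, 1512)   # GTX 260-216 -> ~979.8 GFLOPS

# Relative work per theoretical GFLOP: (runtime ratio) * (GFLOPS ratio)
per_gflop_advantage = (g92_time / gt200_time) * (g92_gflops / gt200_gflops)
print(round(per_gflop_advantage, 2))   # ~1.41 -> GT200 does ~41% more work per GFLOP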

ExtraTerrestrial Apes wrote:

1. The speedup of G200:
- GDF just said 10 - 15%
- based on fractal's numbers it's ~90%
- when the G200 optimizations were introduced I estimated a >30% performance advantage for G200 at the same theoretical FLOPS
- it may well be that the G200 advantage is not constant and scales with WU size, which may explain why we're seeing a much higher advantage now than in the past (with smaller WUs)
- my 9800GTX+ (705 GFLOPS) was good for ~6400 RAC, whereas a GTX 260 (805 GFLOPS) made 10 - 11k RAC prior to the recent credit adjustments
- G200 is more future proof than G92. Both are DX 10.0, but G200 has additional features which may or may not be needed for future CUDA clients.

-> Concluding from these observations, I'd say the advantage of G200-based cards is >50% at the same theoretical FLOPS.


Would still be nice if someone could provide such performance numbers for other WUs.
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 8567 - Posted: 18 Apr 2009 | 14:58:24 UTC

And some more points to consider:

- the GT200 based cards have a higher hardware capability level, i.e. they can run future code which the G9x series can not run [it is unknown at this point if GPU-Grid will require this capability in the future]

- more smaller cards are not always better than fewer high end cards, even if the "purchase cost per €/$" is better: there is a certain overhead required to run a GPU, i.e. you need to provide a PC and a PCIe slot. So if you go for several 9600GSOs instead of GTX 260s you'll need about 3 times as much supporting PC hardware, which adds to the power bill and may add to the purchase cost.

- smaller cards do not always consume proportionally less power: e.g. the GTS 250 looks good in flops/€, but consumes about as much power as the (much faster) GTX 260

- under GPU-Grid the cards consume much less power than the typical power draw quoted by nVidia. As a rough guideline: the additional power draw from the wall for a GTX 280 has been measured at ~130W and a GTX 260 at ~100W.

- I'm not being paid by NV for this or own any of their stock, I just want to help people make smart decisions if they're going to get a new GPU :)

MrS
____________
Scanning for our furry friends since Jan 2002

Martin Chartrand
Joined: 4 Apr 09
Posts: 13
Credit: 17,030,367
RAC: 0
Message 9401 - Posted: 6 May 2009 | 21:40:41 UTC - in response to Message 8567.

I cannot find the chart anymore, but on it my 8800GTX was said to have a G80 core, so not good for GPU crunching, but...
It was crunching GPU tasks. Any particular reason for this behavior?

Martin

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 9448 - Posted: 7 May 2009 | 20:24:17 UTC - in response to Message 9401.

During the first months (about 3?) G80 did run GPU-Grid with a separate code path, to work around the missing features. Later CUDA versions broke something.

MrS
____________
Scanning for our furry friends since Jan 2002

Martin Chartrand
Joined: 4 Apr 09
Posts: 13
Credit: 17,030,367
RAC: 0
Message 9449 - Posted: 7 May 2009 | 20:30:44 UTC - in response to Message 9448.
Last modified: 7 May 2009 | 20:31:40 UTC

Aw ok, thanks a lot.
I now run a GTX285.
For the hardware/software maniacs out there: in my NVIDIA control panel I added seti@home enhanced 6.08, BOINC.exe and GPUGRID.
Can you actually tweak the NVIDIA control panel to maximize those 3 programs, or is that completely irrelevant?

Martin

Martin Chartrand
Joined: 4 Apr 09
Posts: 13
Credit: 17,030,367
RAC: 0
Message 9450 - Posted: 7 May 2009 | 20:36:24 UTC - in response to Message 9449.

Hmm, there isn't a thread about this.
Should I start a new thread, ExtraTerrestrial Apes, about maximizing software through the NVIDIA control panel?

Martin

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 9523 - Posted: 9 May 2009 | 11:05:29 UTC - in response to Message 9450.

Not sure what you mean by maximising, but this is certainly the wrong thread for that. Generally there's nothing the control panel could do for CUDA. Maybe if your card runs GPU-Grid in 2D-Mode (check clocks with GPU-Z), but this is not generally the case (switches to 3D clocks automatically).

MrS
____________
Scanning for our furry friends since Jan 2002

Jonathan Figdor
Joined: 8 Sep 08
Posts: 14
Credit: 425,295,955
RAC: 0
Message 10621 - Posted: 17 Jun 2009 | 5:34:01 UTC - in response to Message 9523.

Can we update this? I have a friend building a new PC and trying to figure out what card to get for part gaming, part crunching, in the $100-300 range.

Profile Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 10626 - Posted: 17 Jun 2009 | 13:52:55 UTC - in response to Message 10621.

Can we update this? I have a friend building a new PC and trying to figure out what card to get for part gaming, part crunching, in the $100-300 range.

What is needed to be updated?

The only part that changes is the prices ... find the most productive card for the money he wants to spend. For that range any of the 200 series cards is possible.

I have 260, 280 and 295 cards and there is not a huge difference in the throughput on these cards... though there is a detectable improvement as you go up in capacity ... any will be decent performers...

Jonathan Figdor
Joined: 8 Sep 08
Posts: 14
Credit: 425,295,955
RAC: 0
Message 10629 - Posted: 17 Jun 2009 | 15:55:20 UTC - in response to Message 10626.

Which is best bang for buck? 275 or 260 core 216? Or does it make sense to scale up to 285/295? Should he wait for GT300 architecture?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 10632 - Posted: 17 Jun 2009 | 20:14:03 UTC - in response to Message 10629.

We can't be sure about GT300 yet. It looks like an expensive monster... certainly impressive, but bang for the buck we cannot assess (yet).

Otherwise... yes, we could update this. Just give me the updated numbers and I'll put them in :)
Sorry, I don't have time to search for them myself.

MrS
____________
Scanning for our furry friends since Jan 2002

Skip Da Shu
Joined: 13 Jul 09
Posts: 63
Credit: 1,997,670,165
RAC: 10,572,372
Message 11128 - Posted: 13 Jul 2009 | 6:25:35 UTC
Last modified: 13 Jul 2009 | 6:34:09 UTC

I found a couple of posts where a person was saying their vid card could not meet the GPUGrid WU deadlines, but he doesn't say which card that is.

Does a 9600GT have sufficient power to finish the WUs in time?
____________
- da shu @ HeliOS,
"A child's exposure to technology should never be predicated on an ability to afford it."

Profile Bymark
Joined: 23 Feb 09
Posts: 30
Credit: 5,897,921
RAC: 0
Message 11129 - Posted: 13 Jul 2009 | 8:17:26 UTC - in response to Message 11128.

I found a couple of posts where a person was saying their vid card could not meet the GPUGrid WU deadlines, but he doesn't say which card that is.

Does a 9600GT have sufficient power to finish the WUs in time?


Yes, a 9600GT can, but it's slow: 28+ hours for a 93-GIANNI one...

<core_client_version>6.4.7</core_client_version>
<![CDATA[
<stderr_txt>
# Using CUDA device 0
# Device 0: "GeForce 9600 GT"
# Clock rate: 1600000 kilohertz
# Total amount of global memory: 536543232 bytes
# Number of multiprocessors: 8
# Number of cores: 64
MDIO ERROR: cannot open file "restart.coor"
# Time per step: 204.018 ms
# Approximate elapsed time for entire WU: 102008.765 s
called boinc_finish

</stderr_txt>
]]>


____________
"Silakka"
Hello from Turku > Åbo.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 12196 - Posted: 29 Aug 2009 | 0:00:15 UTC - in response to Message 8566.
Last modified: 29 Aug 2009 | 0:12:26 UTC

Would still be nice if someone could provide such performance numbers for other WUs.


Recently replaced a Palit GTS 250 with a Palit GTX 260 (216), so I have some performance numbers. Details & Specs:

The GTX260 has two fans rather than the one on the GTS250. Although the GTX260 is louder, the temperature is a bit less; 71 °C (running GPUGrid) rather than 76°C.
The GTX260 is a bit longer too, but both would make a BIOS reset awkward, so no messing.
The 250 has VGA, DVI and HDMI, but is only 1.1 Compute Capable (CC); using the G92 core.
The 1.3 CC (G200 core) GTX260 only has 2 DVI ports, but I have a DVI to HDMI converter and a DVI to VGA adapter, should I ever need them.
Although the GTX260 has 27 Multiprocessors and 216 Shaders, compared to the GTS250’s 16 Multiprocessors and 128 Shaders, my system’s power usage is surprisingly similar, perhaps even slightly less for the GTX260! Surprising until I looked at the clock rates; GTS250 1.85GHz, GTX260 1.35GHz!
http://www.techpowerup.com/gpuz/6y4yp/
http://www.techpowerup.com/gpuz/hkf67/

Apart from changing the cards, the system is identical and Granted credit was the same for both WU’s (5664.88715277777):

On the GTS250 I completed the Work Unit 48-GIANNI_BINDTST001-7-100-RND7757_2 in 53268.082 s

On the GTX260 I completed the Work Unit 334-GIANNI_BIND001-7-100-RND2726_1 in 31902.258 s

The GTS250 has a Boinc GFlops rating of 84 while the GTX260 is 104, which would make the GTX almost 20% faster, going by the Boinc GFlops rating.

However, the similar work unit did not complete in 80% of the time it took the GTS250 (which would have been 42614.4656 sec); it completed it in 59.89% of the time. So to turn that around the new card was between 40 and 41% faster overall, and about 20% faster than Boinc predicted with its GFlops rating (Explained by having the better 1.3CC G200 core, compared to the 1.1CC G92 core of the GTS250).

So for the above conditions, I would say the GTX260 has a rating of 104 (125) Boinc GFlops, with the number in the brackets representing the 1.3 CC/G200 comparable value (or a 1.2 correction factor; +20%) to a 1.1CC G92 core, that was rated at 104 Boinc GFlops.

Perhaps these results vary with different tasks and cards?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 12206 - Posted: 29 Aug 2009 | 14:34:30 UTC - in response to Message 12196.

Thanks for reporting. Your numbers actually exactly confirm what I found, you just got it wrong at the end ;)

GTS250: 84 BOINC GFlops, 53268s
GTX260: 104 BOINC GFlops, 31902s

Based on the GFlops rating the GTX 260 should have needed 53268s * 84/104 = 43024s. Real performance is 43024 / 31902 = 1.35 times (= +35%) faster per GFlop. So the GTX 260 would deserve a BOINC GFlops rating of 140 to represent this. These 35% are comfortably close to the 41% I determined earlier this year :)
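As a small check, the same correction in Python (a sketch using only the numbers quoted above):

gts250_time, gts250_rating = 53268, 84    # seconds, BOINC GFlops
gtx260_time, gtx260_rating = 31902, 104

# Runtime the GTX 260 "should" have needed if performance scaled 1:1 with the rating
expected_time = gts250_time * gts250_rating / gtx260_rating   # ~43024 s
per_gflop_speedup = expected_time / gtx260_time               # ~1.35 -> +35% per GFlop
effective_rating = gtx260_rating * per_gflop_speedup          # ~140 BOINC GFlops
print(round(expected_time), round(per_gflop_speedup, 2), round(effective_rating))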

BTW: I don't recommend using BOINC GFlop ratings, as they depend on a totally arbitrary choice of code. Right now this rating scales linearly with theoretical maximum GFlops, but the numbers could change anytime, as soon as the Devs decide to use different code. On the other hand the theoretical maximum values (times correction factors, which may depend on driver and app versions) don't change over time.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 14814 - Posted: 30 Jan 2010 | 0:16:27 UTC

If the math is done, it's a question of getting the max credit per $/€ invested plus the cost of powering the beast. I'm not in this for the credits, nor do I have any special interest, other than hoping that I can help the collective, because I might need some helping out myself someday.

If the efficiency of BOINC can be increased w/o running the risk of getting tonnes of errors, that would help out even more.

I try to focus on projects related to medical studies, entertainment, & non-ET research. I really hope that it's not just a waste of money & power. If I wanted that, I'd rather use my PCs for games.
____________

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 14819 - Posted: 30 Jan 2010 | 5:05:18 UTC - in response to Message 8566.

[quote of the full card list and notes from message 8566]


Efficiency & total potential are factors, but what about initial purchase cost vs running cost? Prices swing, some people are lucky enough to get a good deal on a GPU, and the price of power differs from country to country. Whether you use an 85+ efficiency PSU, and whether it's winter and the beast warms up the room, are also factors.

____________

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 14844 - Posted: 30 Jan 2010 | 22:22:48 UTC - in response to Message 14819.

There are lots of factors to consider. Initial purchase costs, running costs, actual contribution (points being a good indicator), reliability, even aesthetics perhaps, certainly noise, heat and other unwanted/wanted side effects!
Until the new NVidia GPUs are released the most efficient is the GT 240. It is fairly inexpensive to purchase, cheap to run and delivers excellent performance given its small power consumption. It does not require any extra power connectors and can fit most standard cases. It is also very reliable!
Tomorrow, my GT 240 will be migrating from an Opteron @ 2.2 GHz to an i7 at a presently non-determined frequency. Should see a bit more.

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 14848 - Posted: 31 Jan 2010 | 0:19:43 UTC - in response to Message 14844.

24/7 operation shortens the lifespan, although over 3 years that will not be an issue if it's credits you want to generate. Mine's factory OC'd, plus slightly more. And 1 PC running several GPUs is overall more efficient than several PCs each running just one GPU.

I haven't checked out the GT 240, but does it OC well?

Also, GPUGRID.net doesn't run SLI or Crossfire, and mixing different GPUs (as far as this n00b understands) is not an issue.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 14907 - Posted: 1 Feb 2010 | 18:48:50 UTC - in response to Message 14848.

Just moved the GT 240 to an i7, to see how much it benefits from the added CPU power. I have not tried to OC it yet, as I was mainly concerned with reliability. I did underclock the Opteron from 2.2GHz down to about 800MHz and the GPU's time to complete tasks rose significantly, so giving it faster CPU support should reduce task turnover times. Its low temperatures should allow it to clock reasonably well. I will try to OC it in a few days, after I have a better i7 heatsink, as the i7 is running too hot and could interfere with overclocking the GPU. It's also good to get some stock results in first.

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 15704 - Posted: 12 Mar 2010 | 7:42:01 UTC

Is the GT240 still the best bang for the buck? I'm looking at a 9800GT "Green", which has 112 stream processors, a 550MHz core clock and no PCIe power connector (like the GT240). But I'm also looking at the GTS240, which I "suspect" is just a rebranded non-green version of the 9800GT "Green" (AKA the original 8800GT/9800GT). The GTS240 also has the PCIe power connector the "Green" is missing, and a 675MHz core clock instead of the "Green" 550MHz. BTW, what makes most sense: the GT240 with 96 stream processors and a 550MHz core clock, with the potential of a higher OC, or the 9800GT "Green" with 112 stream processors but possibly lower OC potential?
____________

Profile [AF>Libristes] Dudumomo
Joined: 30 Jan 09
Posts: 45
Credit: 425,620,748
RAC: 0
Message 15712 - Posted: 12 Mar 2010 | 15:57:08 UTC - in response to Message 15704.

I would say the number of SPs is more important than the frequency (obviously we have to check both), but with such a difference I would prefer a 9800GT.

But liveonc, I really recommend you not to buy a G92 anymore, but to go for a GT200!
A GTX260 is around 90-100€.
Or the best GPU is, IMO, the GTX275 (150€). But I bought mine at this price in September, I guess, and the price is still around 150€ (a GTX260 may now be more worthwhile).

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 15715 - Posted: 12 Mar 2010 | 17:07:17 UTC - in response to Message 15712.

I would say the number of SPs is more important than the frequency (obviously we have to check both), but with such a difference I would prefer a 9800GT.

But liveonc, I really recommend you not to buy a G92 anymore, but to go for a GT200!
A GTX260 is around 90-100€.
Or the best GPU is, IMO, the GTX275 (150€). But I bought mine at this price in September, I guess, and the price is still around 150€ (a GTX260 may now be more worthwhile).


I do like the G200, but I can't find a single one that I can fit into a single-slot nano-BTX case, nor the micro-ATX HTPC case. I've got 3 GTX260-216s, & I do love them! Even more than the GTX275, maybe because I'm so cheap.
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 17592 - Posted: 13 Jun 2010 | 11:23:26 UTC

Guys..

Neither SPs nor frequency is more important, it's the product of both which decides performance (everything else being equal). BTW it's mainly the shader clock, not the core clock.

And the GT240 is CUDA hardware capability level 1.2, so it is >40% faster per FLOP (flops = # of shaders * frequency * operations per clock per shader) than the old CUDA hardware 1.1 chips like G92. Completely forget about buying a GTS240/250 or 9800GT Green for GPU-Grid!

And, sure, GT200 cards are faster than a single GT240, but the latter is more efficient due to its 40 nm process compared to the older 65 and 55 nm chips.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile liveonc
Joined: 1 Jan 10
Posts: 292
Credit: 41,567,650
RAC: 0
Message 17593 - Posted: 13 Jun 2010 | 11:41:08 UTC - in response to Message 17592.
Last modified: 13 Jun 2010 | 12:07:40 UTC

Guys..

Neither SPs nor frequency is more important, it's the product of both which decides performance (everything else being equal). BTW it's mainly the shader clock, not the core clock.

And the GT240 is CUDA hardware capability level 1.2, so it is >40% faster per FLOP (flops = # of shaders * frequency * operations per clock per shader) than the old CUDA hardware 1.1 chips like G92. Completely forget about buying a GTS240/250 or 9800GT Green for GPU-Grid!

And, sure, GT200 cards are faster than a single GT240, but the latter is more efficient due to its 40 nm process compared to the older 65 and 55 nm chips.

MrS


Now that Nvidia is so good at rebranding, reusing & slightly changing old chips, would there be any sense in a 40nm version of the Asus Mars 295 Limited Edition? http://www.techpowerup.com/95445/ASUS_Designs_Own_Monster_Dual-GTX_285_4_GB_Graphics_Card.html It's old tech, yes, but so is the GTS250/9800GTX/8800GTX. Maybe a 32nm flavour with GDDR5? If a 32nm part is something Nvidia "might" want to do, would it be small enough, and possible to "glue" it together like what Intel did with their Core 2 Quad? Two steps forward & one step back, just like Intel with the PIII & P4...
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 17595 - Posted: 13 Jun 2010 | 15:59:35 UTC - in response to Message 17593.

They should be shipping the improved 40 nm version of GT200 in less than a month. It's called GF104 and is even better than a shrink of GT200: it's a significantly improved, more flexible and more efficient design due to its Fermi heritage. Remember: GF100 is not very attractive because it's insanely large and because it doesn't have enough TMUs. A half GF100 + double the amount of TMUs can fix both. See the Fermi thread in the other sub forum :)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 18333 - Posted: 12 Aug 2010 | 17:47:43 UTC - in response to Message 17595.

This is worth a quick look,
http://www.bit-tech.net/hardware/graphics/2010/08/05/what-is-the-best-graphics-card-for-folding/4

BarryAZ
Joined: 16 Apr 09
Posts: 163
Credit: 920,875,294
RAC: 142
Message 19025 - Posted: 24 Oct 2010 | 15:47:14 UTC - in response to Message 18333.

One of the calculations that needs to be included in a credit per dollar table is the frequency in GPUGrid of medium to long run computation errors, which GPUGrid work units are very much prone to. This has been something of a long-running problem (months, perhaps years).

After a hiatus of a couple of months, I tried GPUGrid again, this time with a 9800GT installed on a Windows 7 64bit system with the latest drivers for the 9800GT, no other GPU project running and no overclocking. Things looked OK for the first three work units (though I was a bit surprised by the 24 hour plus run time), then the fourth work unit ended in a computation error after over 8 hours.

I don't see this sort of problem running the same cards with SETI, Collatz or Dnetc (though with them the run times are a lot shorter). For that matter, long CPU run times don't kick up computation errors (say on Climate or Aqua), or when they do on Climate, the trickle credit approach there provides interim credit anyway.

In any event, the medium/long run computation errors that plague GPUGrid seem to me to be either application or work unit specific and act as a major disincentive to run GPUGrid for me.

Siegfried Niklas
Joined: 23 Feb 09
Posts: 39
Credit: 144,654,294
RAC: 0
Message 19026 - Posted: 24 Oct 2010 | 16:29:26 UTC - in response to Message 19025.

I was running 4x 9800GT (G92 rev A2) on 4 different systems with WinXP 32bit, Win Vista 32bit/64bit and Win7 64bit.
I had to "micro-manage" the work cache most of the time. During the last weeks I wasn't able to crunch a single *-KASHIF_HIVPR_*_bound* (*_unbound*) without error.

A few days ago I decided to pull my G92 cards from GPUGrid - my 2x GTX 260 and 1x GTX 295 will remain.

Tex1954
Joined: 20 May 11
Posts: 16
Credit: 86,798,974
RAC: 0
Message 21238 - Posted: 25 May 2011 | 6:31:26 UTC

Well, I finished a couple of those LONG tasks and good grief they are REALLY long.

I'm burning in two brand new EVGA GTX 560 Ti cards in my little 4.1GHz crunch machine before I install water blocks on them.

They are clocked at 951MHz and running the latest 275.27 Beta Nvidia drivers.

First long task took 42511 seconds (11.8 hrs) and the second LONG task took 69849 seconds (19.4Hrs).

First run gave me 3722 points per hour and the second longer run gave me 2721 points per hour on same boards.

Something seems fishy here... one would think the points would be more standardised and scale consistently when the same kind of task is run on the same hardware...

8-)

Tex1954

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 21242 - Posted: 25 May 2011 | 12:14:14 UTC - in response to Message 21238.

Different task types take different amounts of time to complete.
For example, p5-IBUCH_6_mutEGFR_110419-17-20-RND6259_2 and A39-TONI_AGG1-20-100-RND3414_1

Claimed 42,234.77 and Granted 52,793.46 suggests you did not return the task within 24h for the 50% bonus and instead received 25% for completing the task inside 2 days. If you overclock too much, tasks will fail and even before they fail completion times can be longer due to recoverable failures. Another problem can be with the drivers forcing the card to downclock.
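A quick sketch of that bonus arithmetic in Python (the 25% and 50% bonus tiers are as described above; the multipliers are just those percentages applied to the claimed credit):

claimed = 42234.77
granted_2_day_bonus = claimed * 1.25   # ~52793.46, matching the granted credit
granted_24h_bonus   = claimed * 1.50   # ~63352, what a return within 24 h would have paid
print(round(granted_2_day_bonus, 2), round(granted_24h_bonus, 2))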

Tex1954
Joined: 20 May 11
Posts: 16
Credit: 86,798,974
RAC: 0
Message 21244 - Posted: 25 May 2011 | 12:32:01 UTC
Last modified: 25 May 2011 | 12:33:02 UTC

The EVGA GTX 560 Ti versions I have are the Super Clocked variant and run at 900MHz normally, and that's where they sit burning in before I install waterblocks on them. They pass all the hardest 3D tests and memory tests I can throw at them, so no problems there I think.

I'm new to GPUGRID and didn't know about the bonus thing. My little crunching computer gets turned off and on a lot lately for hardware changes, software changes, water loop updates and such. I especially have problems with the GPU clocks switching to lower frequencies and never coming back! I've reported it to the forum and tech support at Nvidia.

http://forums.nvidia.com/index.php?s=9f29a996e0ac9d6ea44a506f6631f805&showtopic=200414&pid=1237460&st=0&#entry1237460

However, for now, it seems something I did worked and the clocks haven't shifted all night... but it's an ongoing problem. That last slow work unit is a result of the clocks shifting to the slow speed all night without me noticing.

The latest Beta drivers are the same... major problems keeping the clocks at full speed.

Sigh... such is life...

:D

Tex1954

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 21249 - Posted: 25 May 2011 | 19:56:52 UTC - in response to Message 21244.
Last modified: 27 May 2011 | 10:31:29 UTC

This under-clocking is becoming one of the biggest self-inflicted problems I have seen in GPU computing - an untreated plague. Only equaled by the unusable shaders (2 warp schedulers per 3 shader groups), and heat/fan issues of present/recent past - avoidable design/driver flaws.

Back on topic, the GTX570 is better here than the GTX560 Ti, and that will remain the case unless superscalar execution can be utilized by ACEMD to access the presently unusable shaders (33%).

Profile Mad Matt
Joined: 29 Aug 09
Posts: 28
Credit: 101,584,171
RAC: 0
Message 21584 - Posted: 3 Jul 2011 | 14:12:41 UTC - in response to Message 21249.

This under-clocking is becoming one of the biggest self-inflicted problems I have seen in GPU computing - an untreated plague. Only equaled by the unusable shaders (2 warp schedulers per 3 shader groups), and heat/fan issues of present/recent past - avoidable design/driver flaws.


A quite reliable solution I found is setting power management from adaptive to maximum performance. This way in case of computation errors the clocks won't fall back to 2D level and you can increase them again using Afterburner without a reboot.

The most frequent reason I found for this downclocking - at least on PrimeGrid - is insufficient voltage causing the computation error in the first place. In my few efforts here, GPUGRID was even more sensitive and could hardly run the same clocks.

____________

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 21606 - Posted: 5 Jul 2011 | 10:38:03 UTC - in response to Message 21584.
Last modified: 5 Jul 2011 | 10:38:27 UTC

Unfortunately, under XP you can no longer set the GPU to Maximum Performance; it's stuck at Adaptive.

Andyboy
Joined: 17 Mar 11
Posts: 2
Credit: 691,286
RAC: 0
Message 21898 - Posted: 28 Aug 2011 | 9:39:27 UTC

I hope it's the right thread - I'm mostly trying to optimise credit/Watt on the next machine I'm building. Initial price is a minor issue. Obviously an efficient PSU and SSD will play a big role for the system. My question here:

"Is any info on support and perfomance of those new APUs (Sandy Bridge / Llano) out there yet? How do they work with additional GPUs?"

Manufactured in the new processes, they should be at least more efficient than an old CPU/GPU combo, right? Or not (yet) because...?

Any answers appreciated!

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 21902 - Posted: 29 Aug 2011 | 13:33:22 UTC - in response to Message 21898.

Sandy Bridge processors are more energy efficient than previous generations by Intel; a stock i7-2600 uses around 65W when crunching 8 CPU projects, an i7-920 (previous generation) uses over 100W and does less work. In terms of CPU crunching per Watt an i7-2600 is about 50% more efficient than an i7-920.
The Llano has a TDP of either 65W or 100W, including the on-die GPU (an A8-3850 has an integrated 6550D). CPU performance is similar to an Athlon II X4 @3.1GHz.
AMD GPU's are not supported here at GPUGrid, so the APU cores of the Llano cannot be used either. I think they might be useable at MilkyWay. Llano GPU performance is around that of the HD 6450 or 5570.
Intel's GPU cores cannot be used to crunch with here (or at any other project that I am aware of). While this might change in the future, at present Intel's graphical cores are basically a waste of die space for crunching. The SB's are still more efficient than Intel's previous CPU's and can do more CPU crunching than the Llano. For crunching here I think a SB is a good match for a Fermi 500 series GPU; it's efficient and can do lots of CPU work as well as support the GPU, and with just one CPU thread.

Which CPU to get depends on what you want to use the system for and what you want to crunch.
All that said, it's much more important to get a good GPU.

Andyboy
Joined: 17 Mar 11
Posts: 2
Credit: 691,286
RAC: 0
Message 21930 - Posted: 31 Aug 2011 | 17:45:27 UTC - in response to Message 21902.

Thanks, skgiven. Very informative answer!

I love your term "a waste of die space". That's just what i thought it would be. So in CPU-Terms I'll wait for 32nm CPUs without Graphics (Bulldozer and ?). And, like you said, care more about a good Graphic Card. However: Power Consumption is an even bigger issue there...

Just to be perfectly clear: For any given Graphic Card, the CPU firing it makes no difference?

Background: I want to do "something useful" - and this should exclude checking for primes or doing yet another 10k tries at "3x+1" (Collatz). I loved Virtual Prairie, but that's CPU-only.

Then again, those "less world improving" projects score the big points for each kWh spent. At least with my current hardware, a GT430.

In the end, I want to GPU-crunch useful projects (GPUGrid, MilkyWay, SETI) and still keep my position in the top 10,000... and since the useful projects are "less generous" with points, I'm considering improving my hardware. However, not at the cost of wasting more electricity!

Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Message 21932 - Posted: 31 Aug 2011 | 22:55:04 UTC - in response to Message 21930.
Last modified: 31 Aug 2011 | 22:56:50 UTC

The Intel Core i7-970, i7-980, i7-980X and i7-990X six-core socket 1366 CPUs are also made on 32nm technology, without an on-die GPU. You should consider them also, especially if you want to crunch with two GPUs. While the X58 chipset for socket 1366 CPUs has two native 16x PCIe 2.0 connectors, socket 1156 and socket 1155 CPUs have only one 16x PCIe 2.0 bus integrated into the CPU. So if someone uses two GPUs with these CPUs, each GPU will have only 8x PCIe 2.0, and that will lower the performance of the GPUGrid client. However, ASUS made a special MB for socket 1156 CPUs with two native 16x PCIe 2.0 connectors, the P7P55D WS Supercomputer, and one for socket 1155 CPUs (Sandy Bridge), the P8P67 WS Revolution. Maybe other manufacturers have similar motherboard designs.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 22055 - Posted: 11 Sep 2011 | 11:25:32 UTC - in response to Message 21932.
Last modified: 27 Sep 2011 | 19:52:26 UTC

I still don't think X8 makes all that much difference; well not in itself for 2 cards. For two GTX590's perhaps. There are other factors such as the 1366 boards having triple channel memory, 36 PCIE lanes, and 1156/1155 only having dual channel memory. The chipset that controls the PCIE lanes also controls USB, PCI, SATA and LAN, so what else is going on might influence performance. So it's really down to the motherboards implementation and what the system is being used for. Not sure about that P8P67 WS Revolution LGA 1155 Motherboard (2 x PCIe 2.0 x16 (x16, x8)), but the GA-Z68X-UD7-B3 is a bespoke implementation of an LGA 1155 motherboard that offers two full PCIE X16 lanes (x16, x16).

Don't know of a monitoring tool that measures PCIE use. If the memory controller load of ~20% on my GTX470 is anything to go by it's not exactly over taxed.

Anyway, when the Sandy Bridge E type CPUs turn up, their boards will support Quad Channel Memory, PCI-e 3.0 and multiple x16 PCI-e lanes. Obviously these will become the CPU/board combination to have.

Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Message 22056 - Posted: 11 Sep 2011 | 16:11:08 UTC - in response to Message 22055.

I still don't think X8 makes all that much difference; well not in itself for 2 cards. For two GTX590's perhaps.

I have 4 GTX 480s in two PCs:
1. Core2 Quad 9650 @4GHz (FSB 444MHz), X48 chipset, dual native PCIe x16 slots.
2. Core i7-870 @3.92GHz (BCLK 180), P55 chipset (PCIe x16 controller in the CPU)
After overclocking the Core 2 Quad, it is faster (gets a higher RAC) than the Core i7-870, whether I put the 2 GTX 480s in the x16 slots (and have only x8 for both), or put the second card in the third PCIe slot (and have x16 for the first card and x4 for the second). The lower the GPU usage of a WU, the higher the impact of slower PCIe on performance. As far as I can recall, it was around 10% comparing x8 and x16. You can see the difference (it's now around 20%) between x4 and x16 among my host's results.

Anyway, when the Sandy Bridge E type CPUs turn up, their boards will support Quad Channel Memory, PCI-e 3.0 and multiple x16 PCI-e lanes. Obviously these will become the CPU/board combination to have.

Of course, but I have to add that they will be quite expensive.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 22058 - Posted: 11 Sep 2011 | 18:56:28 UTC - in response to Message 22056.

I see your Core2 Quad cards are about 12% slower than your i7-870's X16 GPU, but your i7-870's X16 GPU is 17% faster than the X4 GPU (just going by a few tasks). So overall your C2Q is only around 3.5% slower. Not sure if you are using HT or not, and what else you are crunching? HT might make a slight difference between your CPU types, C2Q and i7-870, but only a little if any between X16, X8 and X4. What the CPU is being used for and what else the controller is spending time on could also make a difference. The faster the CPU and GPU, the more it would potentially expose the weakness in PCIE lanes.
While I don't consider an 8.5% loss to be massive when at X8, it's not my system and there is a hidden catch; it's either 8.5% for both cards or 17% of one GTX480, so I take your point. Adding a second GPU on an X16/2x-X8 is really only adding 83% of a GPU.

When you also consider that a fully optimized Win7 system is still around 10 to 15% (say 13%) slower than XP, adding a second GTX480 in a W7 system that only supports PCIE X8 for 2GPU's, would mean the setup would be doing around 22% less overall compared to a similar XP system that was X16 capable for 2 cards.
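A rough sketch of how those percentages combine (the ~13% Win7-vs-XP and ~8.5% x8-vs-x16 figures are the estimates from the post above, not measurements of mine):

win7_loss, pcie_x8_loss = 0.13, 0.085

additive = win7_loss + pcie_x8_loss                        # ~0.215 -> "around 22% less"
multiplicative = 1 - (1 - win7_loss) * (1 - pcie_x8_loss)  # ~0.204, the same ballpark
print(round(additive, 3), round(multiplicative, 3))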

I expect the difference might be different for different cards. PCI x8 deficiency might be slightly less for a GTX 460 and cards such as the GTS 450, but more for a GTX580.

So a good setup includes a good operating system, GPU, CPU and other components; the motherboards PCIE implementation (two X16 channels), RAM and HDD.

Presumably this would apply to a GTX590 as well; performance would be around 8.5% less than expected, as the dual card is sharing a single X16 PCIE lane set.

Would be interesting to know what this is for other cards in practice under different architectures (LGA 1366, 1155, AMD rigs) as well as your LGA 1156 setup.

At the minute I think it means anyone with an older dual X16 board and one GPU should hold off getting a new system just to add a new card. Just buy the second card, unless you want to fork out for the faster CPU and can find a dual X16 motherboard at a reasonable price.

Profile Fred J. Verster
Joined: 1 Apr 09
Posts: 58
Credit: 35,833,978
RAC: 0
Message 22116 - Posted: 15 Sep 2011 | 20:23:54 UTC - in response to Message 22058.
Last modified: 15 Sep 2011 | 21:02:42 UTC

I'm still satisfied with my 2-year-'old' C2 Extreme X9650, now running @ 3.51GHz with the stock air cooler, but no case. (It draws 335 Watt doing a GPUgrid WU; running 2 or 3 SETI MB WUs it needs 410 Watt.)
It sits on an ASUS P5E motherboard (X38 chipset) with 4 GiB of DDR2 running just below 400MHz per stick, FSB = 1524MHz, under Windows XP64 Pro.
The GTX480 runs in a PCIe 2.0 x16 slot. (It also runs SETI@home and EINSTEIN - this type "eats anything".)

And this year I built an i7-2600 system with 2 ATI 5870 GPUs (CrossFire) on an INTEL DP67BG motherboard (USB 2.0 & 3.0; eSATA), DDR3 1333MHz, multi-booting UBUNTU/DEBIAN/Windows 7, all 64bit (OEM). It does SETI and SETI Beta ATI GPGPU using OpenCL, which can handle anything from VLAR to 'other' Multi Beam WUs, plus CPDN (~500 hour) work.

Also AstroPulse, avoided by a lot of people ;-). It is time consuming on an AMD CPU; it takes about 8 hours on an X9650 (@3.51GHz), and half the time, 2 at a time, on a 5870 GPU!
Those SANDY BRIDGE chips, like the i7-2600(K), sure are efficient, doing 1.65x the work (HT on) of an X9650 (@3.51GHz) at 85 vs. 120 Watt.

MilkyWay and Collatz C. work great. MW hasn't much work a.t.m.
____________

Knight Who Says Ni N!

Profile Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Message 22160 - Posted: 24 Sep 2011 | 21:21:02 UTC - in response to Message 22058.

Not sure if you are using HT or not, and what else you are crunching?

I'm using HT, but only 4 tasks are running at the same time. (2 GPUGrid and 2 Rosetta@home)

I expect the difference might be different for different cards. PCI x8 deficiency might be slightly less for a GTX 460 and cards such as the GTS 450, but more for a GTX580.

It's obvious. That's why the GTX 590 (made of 2 underclocked GTX 580s) has its own NForce 200 PCIe x16 to 2x PCIe x16 bridge. In this way both chips have PCIe x16. NVidia could have put 2 GPUs on a single board without this bridge chip, but both GPUs would then have PCIe x8, and that would decrease the overall performance too much. It's impractical for a top-end dual-GPU card.

Would be interesting to know what this is for other cards in practice under different architectures (LGA 1366, 1155, AMD rigs) as well as your LGA 1156 setup.

I couldn't resist getting the ASUS P7P55 WS SuperComputer MB I mentioned earlier, so now my LGA1156 host runs with it.
I kept the old P7P55D Deluxe MB too (and I bought another Core i7-870 for it), so I can put the two GTX 480s from my Core2 Quad host into this MB, and we can compare the dual PCIe x8 setup with the dual PCIe x16 setup.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 22200 - Posted: 2 Oct 2011 | 13:02:21 UTC - in response to Message 22160.
Last modified: 2 Oct 2011 | 14:16:22 UTC

Tested one of my i7-2600 systems, with a GTX470 @657MHz. Turns out the Gigabyte PH67A-UD3-B3 has one x16 slot and one x4 slot! The difference was around 13%. So a GTX580 would be >13% slower and a GTX460 would be less. Looked at several 1155 motherboards and on some the second slot is only x1 when both are occupied.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 22210 - Posted: 4 Oct 2011 | 0:38:16 UTC - in response to Message 22200.

PCIE 3 is also now available on some motherboards (mostly MSI). While these boards still tend to offer one x16 slot and either one x8 or one x4, the improvement over PCIE 2.0 is simple: the bandwidth doubles from PCIE 2 to PCIE 3.
So a PCI Express x16 Gen 3 slot provides 32GB per sec compared to the 16GB per sec for PCIE2, and a PCIE3 slot at x8 is just as fast as a PCIE2 slot at x16.
The LGA1155 Z68A-G45 (G3), for example, has two PCIE3 slots. If one is used it operates at x16 (32GB per sec) and if two are used both are x8 (16GB per sec each).
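For reference, a small sketch of where such bandwidth figures come from, assuming the usual per-lane rates of roughly 250/500/1000 MB/s per direction for PCIe 1.0/2.0/3.0; the 16 and 32 GB/s numbers above count both directions.

per_lane_gb_s = {1: 0.25, 2: 0.5, 3: 1.0}   # approx. GB/s per lane, one direction

def slot_bandwidth(gen, lanes):
    return per_lane_gb_s[gen] * lanes

print(slot_bandwidth(2, 16))   # 8.0  GB/s - PCIe 2.0 x16
print(slot_bandwidth(3, 16))   # 16.0 GB/s - PCIe 3.0 x16 (32 GB/s counting both directions)
print(slot_bandwidth(3, 8))    # 8.0  GB/s - PCIe 3.0 x8, the same as PCIe 2.0 x16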

toms83
Joined: 12 Oct 09
Posts: 3
Credit: 4,026,093
RAC: 0
Message 22441 - Posted: 31 Oct 2011 | 23:16:17 UTC

I'm thinking about a Gigabyte GA-890FXA-UD7 and 3x GTX 570.

Slot speeds would be x8, x4, x8 (with one slot free between the graphics cards, for better cooling). I'm worried about the one at x4. How much of a performance drop should I expect?

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 22447 - Posted: 1 Nov 2011 | 22:25:16 UTC - in response to Message 22441.

A guesstimate, but for a drop from PCIE x16 to x8 you might see a performance drop of around 8% on each GTX570. A further PCIE bandwidth drop to X4 would see the performance drop to around 16% less than X16. So overall you would be losing about 1/3rd of a GPU.
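In other words (a quick sketch using the guessed per-card losses above):

losses = [0.08, 0.16, 0.08]        # x8, x4, x8 slots for the three GTX 570s
print(round(sum(losses), 2))       # 0.32 -> roughly a third of one GPU lost overall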
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile microchip
Joined: 4 Sep 11
Posts: 110
Credit: 326,102,587
RAC: 0
Message 22631 - Posted: 5 Dec 2011 | 13:40:59 UTC
Last modified: 5 Dec 2011 | 13:43:37 UTC

Since we're talking about PCIe lately, I've also got a question

I'm running a GTX 560 on a board that only has a PCIe 1.0 x16 support (old nForce 630a chipset). How much of a roughly performance penalty am I getting for running the GTX 560 on a PCIe 1.0 bus? I've already ordered another mobo which has PCIe 2.0 but am curious about the penalty I'm getting currently on the PCIe 1.0 bus. Does it even matter much for GPUGRID?

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 22632 - Posted: 5 Dec 2011 | 16:14:06 UTC - in response to Message 22631.

My guess is between 4 and 8%, but let us know when you get your motherboard and run a few tasks.
PCIE2.0 x16 has twice the bandwidth of PCIE1.0 x16; 8GB/s vs 4GB/s. As your GTX560 is not a very high-end crunching GPU (mid range), performance will not be impacted as much as it would be for a GTX570, for example; otherwise it would be around 8%.

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Mike Hamm
Send message
Joined: 23 Aug 12
Posts: 1
Credit: 913,492,326
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 27132 - Posted: 23 Oct 2012 | 17:13:55 UTC - in response to Message 22632.

Looks like the Nvidia 680 and 690 are the way to go.

http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units

Best gflops per watt.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27141 - Posted: 24 Oct 2012 | 9:55:02 UTC - in response to Message 27132.

Going by wiki, a GTX660Ti and a GTX660 are better than a GTX680 and a GTX670!

    GTX660 16.22GFlops/Watt
    GTX660Ti 16.40GFlops/Watt

    GTX670 14.47GFlops/Watt
    GTX680 15.85GFlops/Watt

    GTX690 18.74GFlops/Watt


To state the obvious - The GTX690 is two cards, and rather expensive.
Purchase costs and running cost are important too.

Anyway these are GFlops per Watt for the card only, not the system.
You might be able to get 3 GTX660Ti cards for about the same price as one GTX690.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

mikey
Send message
Joined: 2 Jan 09
Posts: 294
Credit: 4,846,798,615
RAC: 23,124,140
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27460 - Posted: 28 Nov 2012 | 15:39:42 UTC - in response to Message 22632.

My guess is between 4 and 8%, but let us know when you get your motherboard and run a few tasks.
PCIE2.0 x16 has twice the bandwidth of PCIE1.0 x16: 8GB/s vs 4GB/s. As your GTX560 is a mid-range rather than a very high-end crunching GPU, performance will not be impacted as much as it would be for a GTX570, for example, which would lose around 8%.


I have seen webpages where people also say the difference is not as great as one would think. 10% is not that big of a deal when talking about a $150 US motherboard. Now going to a new motherboard and 16GB of RAM too IS a big deal! The RAM of course is not used much for GPU crunching, but it does help when crunching CPU units. Getting a new motherboard, new RAM AND a new CPU along with that PCIE2 slot... now THAT is DEFINITELY worth it!!

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28874 - Posted: 28 Feb 2013 | 18:14:22 UTC

Anyone up to updating the OP?

Question: supposedly the GPUGrid app only utilizes 2/3 of the shaders on the GTX 460 (GF104). Yet the GPU utilization on all my 460s runs at 88-89% and the cards run hotter than at other projects. As I remember, when this was first identified as a problem the 460s ran with low GPU utilization and very cool. Are you SURE it's still an issue? If only 2/3 of the shaders are being used, why are the temps relatively high, as is the GPU utilization? Just wondering...

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Scientific publications
watwat
Message 28877 - Posted: 28 Feb 2013 | 19:21:48 UTC - in response to Message 28874.


If only 2/3 of the shaders are being used, why are the temps relatively high, as is the GPU utilization?


That's not an issue any more. The 4.2 apps do a good job of using the cc2.1 Fermis.

MJH

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 28882 - Posted: 28 Feb 2013 | 21:08:44 UTC - in response to Message 28877.

If only 2/3 of the shaders are being used, why are the temps relatively high, as is the GPU utilization?

That's not an issue any more. The 4.2 apps do a good job of using the cc2.1 Fermis. MJH

Thanks for the update, I thought something had changed for the better.

Profile Raubhautz*
Avatar
Send message
Joined: 18 Nov 12
Posts: 9
Credit: 1,867,450
RAC: 0
Level
Ala
Scientific publications
watwatwat
Message 30136 - Posted: 21 May 2013 | 4:55:36 UTC - in response to Message 28874.

Anyone up to updating the OP?

Question: supposedly the GPUGrid app only utilizes 2/3 of the shaders on the GTX 460 (GF104). Yet the GPU utilization on all my 460s runs at 88-89% and the cards run hotter than at other projects. As I remember, when this was first identified as a problem the 460s ran with low GPU utilization and very cool. Are you SURE it's still an issue? If only 2/3 of the shaders are being used, why are the temps relatively high, as is the GPU utilization? Just wondering...


Interesting... I notice that the GIANNI (short tasks) use just under 84% CPU! Not only that, they take 124k sec per WU!!! That is almost twice what it takes to do the NATHAN (long tasks), which average 72k sec on my Quadro K2000M (Kepler w/2GB RAM).

Can anyone explain why the 'short' tasks take almost twice as long as the 'long' versions?

My next step, out of sheer curiosity, is to try the same tests on my other machine, which has a similar video card but of Fermi design (Quadro 2000M).

Thank you in advance.

Phil
____________

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 30144 - Posted: 21 May 2013 | 8:28:26 UTC - in response to Message 30136.
Last modified: 21 May 2013 | 8:28:47 UTC

http://www.gpugrid.net/forum_thread.php?id=3370&nowrap=true#30017

Profile Raubhautz*
Avatar
Send message
Joined: 18 Nov 12
Posts: 9
Credit: 1,867,450
RAC: 0
Level
Ala
Scientific publications
watwatwat
Message 30193 - Posted: 22 May 2013 | 10:56:36 UTC - in response to Message 30144.
Last modified: 22 May 2013 | 10:57:33 UTC

http://www.gpugrid.net/forum_thread.php?id=3370&nowrap=true#30017



Uh, yeah. Hello. Here is a copy of the linked 'response' you posted:

There are two main reasons why more credit is awarded on the long queue:
1) Greater risk for the cruncher to crunch (typically) much longer WUs. If a simulation crashes after 18 hours, that's a much bigger loss than a crash after 2 or 6 hours. This is especially true for older/slower cards.
2) To encourage/reward users who dedicate their computers to the long hours of crunching that long WUs require. With the short queue, you can run a WU while you sleep or run errands, for example, and by the time you wake up or come home it's finished, and you can use your computer for other things. Dedicating a gpu/cpu/computer to run on long queue means you basically can't use it for other things, such as work, entertainment, etc., and so higher credits reward them for that.

Gianni's WUs may be slightly long for the short queue, but my recent tasks were definitely short for the long queue. The reason was that, at the time, my WUs were the only work for anyone to crunch, so I didn't want to make them too long in case people who typically crunch on short tasks wanted WUs to do, but couldn't sacrifice the time. Basically, it was due to the circumstances at that time. We have had a few weeks recently where I was the only one who had work for you guys, which was never a common situation for us. However, we keep adding more users, and it is becoming harder to come up with ideas fast enough (which is a good problem to have!). We are also trying to bring in new scientists!


Very neat-o. This explains how GPUGrid came up with their terminology... it does NOT answer my question.

Using the same equipment, these GIANNI tasks are taking twice as long to run as the NATHAN tasks. Why? You are supposedly compiling with the same options and both are CUDA42.

Can somebody with more than 15 posts of experience answer this question; someone who will read the question before attempting to provide a single-word answer? Thank you.
____________

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 30195 - Posted: 22 May 2013 | 11:05:08 UTC - in response to Message 30193.

Gianni's WUs may be slightly long for the short queue, but my recent tasks were definitely short for the long queue


Other than that, if you are asking why system A is slower than system B then the answer is probably that system A contains more atoms / is more complicated, or the person sending the simulations asked for more simulation steps. Not all simulations are born equal :)
I think the new NATHAN's are indeed long enough for the long queue.

I hope that answered your question. I just wanted to point out that the NATHAN's you were talking about were not "standard" for the long queue.

If you suspect it's your hardware setup then I unfortunately cannot help you and maybe you should make a new thread about it for someone to help you with.

Cheers

MrJo
Send message
Joined: 18 Apr 14
Posts: 43
Credit: 1,192,135,172
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwat
Message 37798 - Posted: 3 Sep 2014 | 13:18:14 UTC

Back to Credit per €:

I have made a few observations regarding consumption (in watts) and the results delivered. I have recorded the results of my graphics cards in an Excel spreadsheet. The table can be downloaded here:

PerformanceProWatt.xls

I had 4 Nvidia cards for comparison: GTX 770, GTX 760, GTX 680 and GTX 750 Ti. With a Xavax energy meter I measured the actual consumption at idle and under load. Example: PC with GTX 770 at idle: 80 watts; with GPUGrid running under load: 245 watts. So running GPUGrid consumes 165 watts. Settings for crunching are only ACEMD long runs (8-12 hours on fastest GPU). In order to obtain a precise average, I used the average of about 20 results per card.

The end result can be found in the orange cell. In short:

The GTX 770 supplies 105 points/watt
The GTX 760 supplies 110 points/watt
The GTX 680 supplies 116 points/watt
The GTX 750 Ti supplies 187 points/watt

So the GTX 750 Ti is the most efficient graphics card at the moment.

Running each PC with one graphics card individually is the most expensive way to operate GPUGRID. So my next project is an aged CPU (Core i7 860) with 2 GTX 750 Ti cards running GPUGrid. In my estimation, the machine will need about 190 watts while achieving roughly the score of a GTX 780.

Costs:
Running a machine consuming 190 watts works out to 1,659.84 kWh per year (24/7/365), which costs €448 (at €0.27 per kWh).
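The same arithmetic as a small Python sketch (the 245/80 W readings, the 190 W estimate and the €0.27/kWh rate are the figures from this post; a full 8760-hour year is assumed, which lands within a few euros of the totals above):

# Wall draw attributable to GPUGrid, plus yearly energy and cost for 24/7 crunching.
def crunching_draw(load_w, idle_w):
    return load_w - idle_w                      # GTX 770 box: 245 W - 80 W = 165 W

def yearly_cost(draw_w, eur_per_kwh=0.27, hours=24 * 365):
    kwh = draw_w / 1000 * hours
    return kwh, kwh * eur_per_kwh

print(crunching_draw(245, 80))                  # 165 W
kwh, eur = yearly_cost(190)                     # planned dual GTX 750 Ti machine
print(f"{kwh:.0f} kWh/year, ~{eur:.0f} EUR/year")   # ~1664 kWh, ~449 EUR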

More ideas for efficient use requested ;-)


____________
Regards, Josef

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 38135 - Posted: 28 Sep 2014 | 18:29:52 UTC - in response to Message 37798.
Last modified: 28 Sep 2014 | 18:53:32 UTC

More ideas for efficient use requested ;-)

Four GTX970's in a quad-core system that supports 4 PCIE slots would probably be the most economically productive system at present. You could probably build a base unit for < £1500, get 2.5M credits/day (at present performance rates), and over 2 years at £0.15/kWh it would cost ~£1500 to run (assuming a 582W draw), at which time the value of the system would be ~£500. So the total cost of ownership (TCO) would be ~£2500 (~$4000 or ~3200 Euro) over 2 years for a credit of 1.8B.
Obviously prices vary by location, running costs depend on the price of electricity (which varies greatly from one place to another), and the app could improve performance...

For comparison, a quad GTX780Ti system would yield ~3.3M credits per day, but the purchase cost would be >£2100, the running cost would be ~£2600 over 2 years, and the system would likewise only be worth ~£500 at the end (older kit). So £4200 for 2.4B.

970 system ~750M credits/£ over 2 years
780 Ti rig ~570M credits/£ over 2 years
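A minimal sketch of that TCO arithmetic (all figures are the estimates above; the raw division comes out at roughly 720K and 570K credits per pound):

# Two-year total cost of ownership: purchase + electricity - resale value.
def tco(purchase_gbp, electricity_2yr_gbp, resale_gbp):
    return purchase_gbp + electricity_2yr_gbp - resale_gbp

quad_970   = tco(1500, 1500, 500)   # ~2500 GBP for ~1.8B credits
quad_780ti = tco(2100, 2600, 500)   # ~4200 GBP for ~2.4B credits
print(f"Quad GTX970:   {1.8e9 / quad_970:,.0f} credits per GBP")
print(f"Quad GTX780Ti: {2.4e9 / quad_780ti:,.0f} credits per GBP")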
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

eXaPower
Send message
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 38162 - Posted: 29 Sep 2014 | 12:38:52 UTC - in response to Message 38135.

More ideas for efficient use requested ;-)

Four GTX970's in a quad-core system that supports 4 PCIE slots would probably be the most economically productive system at present. You could probably build a base unit for < £1500, get 2.5M credits/day (at present performance rates), and over 2 years at £0.15/kWh it would cost ~£1500 to run (assuming a 582W draw), at which time the value of the system would be ~£500. So the total cost of ownership (TCO) would be ~£2500 (~$4000 or ~3200 Euro) over 2 years for a credit of 1.8B.
Obviously prices vary by location, running costs depend on the price of electricity (which varies greatly from one place to another), and the app could improve performance...

For comparison, a quad GTX780Ti system would yield ~3.3M credits per day, but the purchase cost would be >£2100, the running cost would be ~£2600 over 2 years, and the system would likewise only be worth ~£500 at the end (older kit). So £4200 for 2.4B.

970 system ~750M credits/£ over 2 years
780 Ti rig ~570M credits/£ over 2 years


Skgiven, or anybody else: I'm wondering what you would recommend for GPUGRID? I'm in the process of building a DIY Haswell Xeon system (LGA 2011-3 socket for two CPUs on the motherboard, quad-channel memory) and 2 low-powered Z97 dual-channel systems. Two boards will carry as many GPUs as possible, plus a decent amount of storage. (A third will have 2 GTX660Ti and maybe a GTX750Ti included.)

My electric rates have skyrocketed over the past few months where I currently reside. (The town has its own power station and power company, but there has been an ongoing dispute about the cost of distribution, creating rates from 0.0876cents/Kw to over 0.1768cents/Kw. These prices will be in effect for at least 6-9 months unless a term agreement can be met by both sides.) Also, state energy taxes have risen 20% since September of last year.

For my LGA 2011-3 (2P) quad-channel DDR4 motherboard I will choose either an ASUS Z10PE-D8 WS (600 USD) or a GIGABYTE GA-7PESH3 (640 USD), or something else; it's possible to have more than 4 GPUs on the Gigabyte motherboard. For processors (quad channel), I already bought two 85W TDP 6c/6t 2603V3 chips for 220 USD each. I'm also considering a Z97 board (LGA1150) for a low-powered Xeon or i5/i7: a 25W TDP 1240LV3 (4c/8t) listed at $278, a $192 i5-4590T (4c/4t, 45W TDP), or a $303 35W TDP i7-4785T (4c/8t).

All newly bought GPUs will be Maxwell based. For the LGA 2011-3 board I'm thinking of four GTX 970s (the GTX980 price/performance/wattage ratio is higher, as you say), unless a flaw has been discovered in the GPC. For the Z97 board, if a GTX 960 is released shortly, I'll get three.

I was given a couple of second-hand GTX660Ti cards by my sister when she updated to a GTX 980 last week in the gaming system I built for her. (She lives out of state, so I don't have access to test the new GM204.) So for these two CC 3.0 cards I was thinking about picking up a Z97 board and an i5 4590T. I will "eco-tune" all cards, including the Maxwells, while waiting for GTX960 prices. (It's possible I'll keep the GTX660Ti cards when the GTX960 is released, or get more GTX970s, depending on the GTX960 price.)

Total system cost (5 GTX 970, or 3 GTX 970 / 2 GTX 960 / 1 GTX750Ti / 2 Xeon 2603V3 / 2 i5 4590T or 2 Xeon 1240LV3 / 1 server MB / 2 Z97 MB), not including PSUs, will be around 3680-4000 USD, bought piecemeal (looking for any and all discounts).

Thank you for the help and advice.


MrJo
Send message
Joined: 18 Apr 14
Posts: 43
Credit: 1,192,135,172
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwat
Message 38236 - Posted: 1 Oct 2014 | 18:05:11 UTC - in response to Message 38135.
Last modified: 1 Oct 2014 | 18:16:37 UTC

Four GTX970's in a quad core system that supports 4 PCIE slots would probably be the most economically productive system at present.

Good idea. Is it necessary for all PCIe slots to run at x16 to obtain the full performance of the cards, or is full performance also available with 4 slots at x8?


creating rates from 0.0876cents/Kw to over 0.1768cents/Kw.

A perfect and more than affordable energy world. Here in Germany we can only dream of prices like 0.1768 cents/kwh. The current price I have to pay is 0.27 Euro/kWh, which is about 0.34 US dollars. Is it true? You pay only 0.1768 cents/kwh?
____________
Regards, Josef

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Scientific publications
watwat
Message 38237 - Posted: 1 Oct 2014 | 18:21:34 UTC - in response to Message 38162.

We use the Asus Z87-WS motherboard - 4 GPUs at x8.

Matt

eXaPower
Send message
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 38238 - Posted: 1 Oct 2014 | 22:14:40 UTC - in response to Message 38236.

I made a small mistake figuring the kWh cost. Considering how energy costs are all over the place, and each country applies a different formula for prices, I went back to my August-September 2014 power bill to see exactly what is included: currently 0.1112c/kWh is the flat rate (it was 0.08-0.10c/kWh from Sept 2013 to March 2014); add a 0.05982c/kWh surcharge for taxes and 0.0354c/kWh for distribution. This brings it currently to ~0.19c/kWh (USD). Natural gas (my hot water) is the opposite: taxes are less than distribution costs. I have three wood stoves for heat during winter. (Electrical costs are decent compared to 0.34c/kWh for Germany.) In 2013 the average total per kWh was 0.1445c, and the average amount of "surcharges" per kWh was 0.0443c. The cost of energy resources has risen considerably in every industrial area.

In Germany, how much of the total price is taxes or "surcharges"? (Of Germany's imports, what percentage is energy?) In the States, certain energy policies (more regulations) have made it very difficult for coal-fired plants (where a large portion of East Coast power comes from) to keep energy prices in order. New coal or nuclear power plants aren't being built. Wind/solar/biodegradable (the cost outweighs any benefit) is okay for a private homeowner who can afford it, not an entire country. Sensible energy policies rarely exist anymore.

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45122 - Posted: 2 Nov 2016 | 20:26:22 UTC

In the course of upgrading my GPUs to Pascal I have worked through a lot of specs and characteristics. I thought that Maxwell was already a major leap forward in regard to efficiency, and was somewhat surprised when I compared multiple generations by the GFLOPS-to-power ratio. (Power draw taken from Wikipedia, typical 3D consumption.) See the graph below.

No doubt Fermi cards should be replaced very soon because of their high energy cost, but Kepler apparently performs better than its reputation. Maxwell (to my surprise) improved the efficiency only moderately.

Side note: the 750 (Ti) is already Maxwell architecture and therefore runs more efficiently than the other 700-series (Kepler) GPUs.

From this view, the 600 and 700 series are rightly still in wide use. But if crunchers with Kepler cards decide to upgrade anyway, they should skip Maxwell and go straight to Pascal.


____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45123 - Posted: 2 Nov 2016 | 22:03:21 UTC - in response to Message 45122.
Last modified: 2 Nov 2016 | 22:03:33 UTC

Nice graph, and I completely agree with what you said about upgrading!

Kepler apparently performs better than its reputation. Maxwell (to my surprise) improved the efficiency just moderately.

It's because they are based on the same 28nm lithography (TSMC had to skip the 20nm step, proposed to use for Maxwell).

Betting Slip
Send message
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45124 - Posted: 2 Nov 2016 | 22:24:25 UTC - in response to Message 45122.

In the course of upgrading my GPUs to Pascal I have worked through a lot of specs and characteristics. I thought that Maxwell was already a major leap forward in regard to efficiency, and was somewhat surprised when I compared multiple generations by the GFLOPS-to-power ratio. (Power draw taken from Wikipedia, typical 3D consumption.) See the graph below.

No doubt Fermi cards should be replaced very soon because of their high energy cost, but Kepler apparently performs better than its reputation. Maxwell (to my surprise) improved the efficiency only moderately.

Side note: the 750 (Ti) is already Maxwell architecture and therefore runs more efficiently than the other 700-series (Kepler) GPUs.

From this view, the 600 and 700 series are rightly still in wide use. But if crunchers with Kepler cards decide to upgrade anyway, they should skip Maxwell and go straight to Pascal.



Really? What about price? Not everyone, or should I say most people, can afford new Pascal cards. So are they now regarded as second class?

I also would like to add that most Pascal card owners are part of a "vanity" project.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45125 - Posted: 2 Nov 2016 | 22:55:17 UTC - in response to Message 45124.

Really? What about price? Not everyone, or should I say most people, can afford new Pascal cards. So are they now regarded as second class?

I consider the older cards second class: the Pascals are so much more efficient than the previous generation(s) that their price can be recouped in the short term through electricity savings.

I also would like to add that most Pascal card owners are part of a "vanity" project.

I would rather call it a "green" project. (Not because it's NVidia's color, but because it's twice as environmentally friendly as older cards.)

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45126 - Posted: 2 Nov 2016 | 23:00:27 UTC

No offense meant. The below graph just illustrates very clearly that Pascal GPUs pay off quickly for hardcore crunchers. For example, the GTX 780Ti is still a well-performing high-end card, but the GTX 1070 yields the same GFLOPS at 70-80W less. If you run the GPU almost day and night you will have an extra cost of 0.08kW*8000hrs*0,2€/kWh=120€. A new GTX 1070 is available from 400€ and the GTX 780Ti can be sold second hand for maybe 150€. Which means you get a new card without further investment after two years, just because of the energy saved.
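A sketch of that payback arithmetic (taking 75 W as the midpoint of the 70-80 W saving, with the 8000 h/year and 0.20 €/kWh from above):

# Energy saved per year by the lower-power card, and how long the upgrade
# takes to pay for itself after selling the old card.
def annual_saving_eur(delta_kw, hours=8000, eur_per_kwh=0.20):
    return delta_kw * hours * eur_per_kwh

def payback_years(new_card_eur, old_card_resale_eur, delta_kw):
    return (new_card_eur - old_card_resale_eur) / annual_saving_eur(delta_kw)

print(f"{annual_saving_eur(0.075):.0f} EUR saved per year")         # ~120 EUR
print(f"{payback_years(400, 150, 0.075):.1f} years to break even")  # ~2.1 years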
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45129 - Posted: 2 Nov 2016 | 23:46:35 UTC
Last modified: 3 Nov 2016 | 0:30:01 UTC

Other examples: take the renowned Kepler GTX 780. The new GTX 1060 yields 10% more SP GFLOPS at 80-100W less power consumption, which leads to the same saving of 100-150€ per year.

And upgrades from Fermi will pay off even more quickly. Replacing the GTX 580 with an affordable GTX 1050 (not in the graph yet) will speed performance up by 10-15% but reduce the power draw by more than 150W!

I would rather call it a "green" project. (Not because it's NVidia's color, but because it's twice as environmentally friendly as older cards.)


yes, that is another valid argument.

Really? What about price? Not everyone, should I say most people can afford new Pascal cards. So, are they now regarded as secomd class. I also woud like to add that most Pascal card owners are a part of a "vanity" project.


I would neither label Maxwell users as "second class" nor Pascal users as "Snobs". If someone aims at a well performing card and just crunches now and again, a GTX 980 (maybe aside from the low 4GB Memory size) is absolutely a good choice. But for 24/7 operation Pascal makes much more sense in terms of energy cost.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45136 - Posted: 3 Nov 2016 | 11:59:57 UTC - in response to Message 45126.
Last modified: 3 Nov 2016 | 12:03:40 UTC

No offense meant. The below graph just illustrates very clearly that Pascal GPUs pay off quickly for hardcore crunchers. For example, the GTX 780Ti is still a well-performing high-end card, but the GTX 1070 yields the same GFLOPS at 70-80W less. If you run the GPU almost day and night you will have an extra cost of 0.08kW*8000hrs*0,2€/kWh=120€


...I meant the annual extra cost, of course. Your electricity rate (€ per kWh) may differ from the example above.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45140 - Posted: 3 Nov 2016 | 16:48:05 UTC - in response to Message 45136.
Last modified: 3 Nov 2016 | 16:49:47 UTC

That chart might not accurately reflect performance running GPUGrid apps!

In the UK the best GPU in terms of performance per £purchase is the GTX1060 3GB.
Based on the reports here that Pascal GPU’s can boost to ~2GHz I’ve put together a more realistic table of boost performance vs price for the Pascal range:

GeForce GTX - - - TFlops (Boost @2GHz) - - - £ UK Cost - - - GFlops/£
1050 - - - - - - - - 2.56 - - - - - - - - - - - 120 - - - - - - 21.3
1050 Ti - - - - - - 3.07 - - - - - - - - - - - 140 - - - - - - 21.9
1060 3GB - - - - 4.61 - - - - - - - - - - - 190 - - - - - - 24.3
1060 6GB - - - - 5.12 - - - - - - - - - - - 240 - - - - - - 21.3
1070 - - - - - - - - 7.68 - - - - - - - - - - - 395 - - - - - - 19.4
1080 - - - - - - - 10.24 - - - - - - - - - - - 600 - - - - - - 17.1

Assumes all cards boost to 2GHz (and the 14nm cards might not). This is only theoretical and ignores factors such as app scaling, supporting OS, CPU, RAM, CPU cache, GPU L2 cache, task performance variations…
Both performance/Watt and performance/purchase cost (outlay) are relevant. Reports of 45W actual use when crunching here on a GTX1060-3GB might throw the previous graph's data all over the place.
The best measurements are actual observations (for here), not theoretical ones. So what do cards actually boost to, and what is the actual performance running specific task types? (Post elsewhere or adapt for here.)
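The TFlops column above follows directly from CUDA cores x 2 FLOPs/clock x boost clock; here is a sketch reproducing it (the core counts are the usual published Pascal specs, an assumption not stated in the post):

# Theoretical single-precision TFLOPS at a 2 GHz boost, and GFlops per pound.
cards = {              # name: (CUDA cores, UK price in GBP from the table above)
    "GTX 1050":     (640,  120),
    "GTX 1050 Ti":  (768,  140),
    "GTX 1060 3GB": (1152, 190),
    "GTX 1060 6GB": (1280, 240),
    "GTX 1070":     (1920, 395),
    "GTX 1080":     (2560, 600),
}
BOOST_GHZ = 2.0
for name, (cores, price_gbp) in cards.items():
    tflops = cores * 2 * BOOST_GHZ / 1000        # 2 FLOPs per core per clock
    print(f"{name}: {tflops:.2f} TFlops, {tflops * 1000 / price_gbp:.1f} GFlops/GBP")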
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45197 - Posted: 5 Nov 2016 | 12:24:31 UTC
Last modified: 5 Nov 2016 | 12:25:08 UTC

Taking performance/purchase cost into consideration is a valid argument. But in that case the service life until the GPU is replaced also matters, in order to calculate the amortization.

Reports of 45W actual use when crunching here on a GTX1060-3GB might throw the previous graphs data all over the place.


Well, I have already read many different power draw and utilization statements in this forum covering all kinds of GPUs: from 45W to 65W or more for the 1060 in particular. So I guess we will never have perfect numbers and utilization figures for cards under every OS. From that view, the graph below should be a workable reference point, although far from perfect.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45213 - Posted: 5 Nov 2016 | 16:33:56 UTC - in response to Message 45197.
Last modified: 5 Nov 2016 | 16:50:15 UTC

IMO 2 years is a reasonable 'working' life expectancy for mid- to high-end GPUs, and that should form part of any TCO analysis for system builds/refurbs/performance-per-cost measurements.

The main reason for ~2 years is the time between two generations of GPU. After 2 years the GPUs you have still retain some value, but that drops off almost completely when another GPU generation becomes available, circa 4 years. So it's good to upgrade the GPUs every 2 years or so. Obviously it depends on the cost and app performance. ATM Pascals are still way too expensive IMHO.

The main base units don't need upgrading every 2 years, but after 4 years (2 GPU generations) you would still get a decent return, and the 'upgrade' might actually be worth bothering about. After 6 years (broadly speaking) the return drops off. Of course this depends on CPU generations and what they bring, if anything. All that tick-tock talk and die shrinkage brought next to nothing in terms of CPU GFlops performance for crunchers, especially GPU crunchers. As Intel strove to enhance their little integrated GPUs, they ignored/starved discrete GPU crunchers of the necessary PCIE support, unless you spend stupid money on needless kit. AMD stuck to PCIE2 as their GPUs don't do complicated programming (CUDA) and don't need higher PCIE bandwidth. NV doesn't make mainstream CPUs, so they've been at the mercy of this unscrutinised duopoly.

If you get an extreme-end model and you buy early then perhaps 2.5 to 3 years might be what you would want, though it's not always what you get! Sometimes the extreme-end cards turn up last. The GTX780Ti (Kepler) arrived 6 months after the 780 and 10 months before the GTX980 (Maxwell), which offered similar performance for less power usage. With less than 2 years of usage you may not realize the performance/Watt benefit over purchase cost for the bigger cards (though that varies depending on purchase and electricity costs).

The 45W, 65W and so on are just observations. Some might be accurate, others might have been down-clocked or influenced by CPU overuse at the time...
I know my GTX1060-3GB uses 75 to 110W on Linux with reasonable GPU usage (80% or so) because I've measured it at the wall. With such a swing (35W) depending on what tasks are running, we need to be careful with assessments.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rabinovitch
Avatar
Send message
Joined: 25 Aug 08
Posts: 143
Credit: 64,937,578
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 45255 - Posted: 14 Nov 2016 | 10:33:27 UTC

Does anyone know what is the better choice - 750 Ti or 1050Ti (both 4Gb)?
____________
From Siberia with love!

kain
Send message
Joined: 3 Sep 14
Posts: 152
Credit: 702,932,245
RAC: 2,097,127
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 45257 - Posted: 14 Nov 2016 | 14:13:06 UTC - in response to Message 45255.

Does anyone know what is the better choice - 750 Ti or 1050Ti (both 4Gb)?


1050 Ti of course.

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 47018 - Posted: 18 Apr 2017 | 15:34:19 UTC
Last modified: 18 Apr 2017 | 15:35:23 UTC

As an addition to my below graph, the new 1080ti yields 53 GFLOPS/Watt and the Titan XP about the same, assuming that it pulls 220-230W just like the Titan X. Which means the efficiency is identically equal to the non-"ti" 1080. So much for theory...
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

PappaLitto
Send message
Joined: 21 Mar 16
Posts: 511
Credit: 4,672,242,755
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwat
Message 47110 - Posted: 26 Apr 2017 | 13:34:19 UTC - in response to Message 47018.

As an addition to my below graph, the new 1080ti yields 53 GFLOPS/Watt and the Titan XP about the same, assuming that it pulls 220-230W just like the Titan X. Which means the efficiency is identically equal to the non-"ti" 1080. So much for theory...


Since the 1070 is the same GP104 die, it performs very similarly to the gtx 1080 in this project, costs much less, and uses slightly less power. The 1070 is the best performance per watt, performance per dollar card for this project.

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 47111 - Posted: 26 Apr 2017 | 13:40:19 UTC - in response to Message 47110.

As an addition to my below graph, the new 1080ti yields 53 GFLOPS/Watt and the Titan XP about the same, assuming that it pulls 220-230W just like the Titan X. Which means the efficiency is identically equal to the non-"ti" 1080. So much for theory...


Since the 1070 is the same GP104 die, it performs very similarly to the gtx 1080 in this project, costs much less, and uses slightly less power. The 1070 is the best performance per watt, performance per dollar card for this project.


I agree.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Profile JStateson
Avatar
Send message
Joined: 31 Oct 08
Posts: 186
Credit: 3,361,527,550
RAC: 283,434
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 52513 - Posted: 15 Aug 2019 | 15:16:18 UTC - in response to Message 47110.

Since the 1070 is the same GP104 die, it performs very similarly to the gtx 1080 in this project, costs much less, and uses slightly less power. The 1070 is the best performance per watt, performance per dollar card for this project.


I have a 1660Ti that I would like to test out but but but …

1487 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for Short runs (2-3 hours on fastest card)
1488 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for Long runs (8-12 hours on fastest card)
1489 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for New version of ACEMD
1490 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for Anaconda Python 3 Environment
1491 GPUGRID 8/15/2019 10:11:04 AM Project has no tasks available

Erich56
Send message
Joined: 1 Jan 15
Posts: 1120
Credit: 8,948,070,176
RAC: 31,109,761
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 52514 - Posted: 15 Aug 2019 | 16:00:37 UTC - in response to Message 52513.


I have a 1660Ti that I would like to test out but but but …

1487 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for Short runs (2-3 hours on fastest card)
1488 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for Long runs (8-12 hours on fastest card)
1489 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for New version of ACEMD
1490 GPUGRID 8/15/2019 10:11:04 AM No tasks are available for Anaconda Python 3 Environment
1491 GPUGRID 8/15/2019 10:11:04 AM Project has no tasks available

well, you have been with us for 11 years now; so you should have gotten used to this kind of situation :-)

Greg _BE
Send message
Joined: 30 Jun 14
Posts: 135
Credit: 117,694,439
RAC: 266,361
Level
Cys
Scientific publications
watwatwatwatwatwat
Message 53064 - Posted: 23 Nov 2019 | 9:21:57 UTC
Last modified: 23 Nov 2019 | 9:27:38 UTC

I am currently running a GTX 650 (from an older pc) and a GTX 1050TI.
I am thinking of replacing the 650, but don't want to spend a ton of money.
I saw on a deep learning page that they thought the 1060 was the best of the GTX 10 series, but you guys are saying 1070.

So which is better for the money spent and the power used?
GTX 1070 or 1060? And then vanilla or Ti version?

KAMasud
Send message
Joined: 27 Jul 11
Posts: 137
Credit: 523,901,354
RAC: 0
Level
Lys
Scientific publications
watwat
Message 53067 - Posted: 23 Nov 2019 | 11:40:43 UTC - in response to Message 53064.

Running a 1060 and quite happy with it.
As an aside, can anyone comment on external GPU's for laptops? My CPU's are idle.

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 579
Credit: 8,927,587,024
RAC: 17,280,704
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53077 - Posted: 23 Nov 2019 | 19:12:34 UTC - in response to Message 53064.

So which is better for the money spent and the power used?

Following thread might be of your interest:
http://www.gpugrid.net/forum_thread.php?id=4987
Personally, I find the GTX 1650 interesting due to its relatively low cost, low power consumption and moderate performance.
I have one currently working, processing about 2.5 ACEMD3 test tasks per day.

And I see your 1050 Ti is currently failing all tasks.
I had the same problem with the same card model.
How that problem was solved can be found in this other thread:
http://www.gpugrid.net/forum_thread.php?id=4999

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1332
Credit: 7,085,167,459
RAC: 14,961,402
Level
Tyr
Scientific publications
watwatwatwatwat
Message 53078 - Posted: 23 Nov 2019 | 21:14:51 UTC - in response to Message 53067.

Running a 1060 and quite happy with it.
As an aside, can anyone comment on external GPU's for laptops? My CPU's are idle.

If your laptop offers a Thunderbolt 3 connection, then any number of eGPUs are available.
https://www.anandtech.com/show/15143/gigabyte-aorus-rtx-2080-ti-gaming-box-liquidcooled-tb3-egfx-graphics

KAMasud
Send message
Joined: 27 Jul 11
Posts: 137
Credit: 523,901,354
RAC: 0
Level
Lys
Scientific publications
watwat
Message 53082 - Posted: 24 Nov 2019 | 8:19:11 UTC - in response to Message 53077.

Thank you both.

KAMasud
Send message
Joined: 27 Jul 11
Posts: 137
Credit: 523,901,354
RAC: 0
Level
Lys
Scientific publications
watwat
Message 53083 - Posted: 24 Nov 2019 | 8:20:15 UTC - in response to Message 53078.

It does. Is daisy-chaining allowed, or is that too ambitious?

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1332
Credit: 7,085,167,459
RAC: 14,961,402
Level
Tyr
Scientific publications
watwatwatwatwat
Message 53094 - Posted: 24 Nov 2019 | 16:53:18 UTC - in response to Message 53083.

From that egpu review, it seems that the gpu maxes out the TB3 connection. I doubt it could support another egpu daisy chained behind the first egpu.

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 53182 - Posted: 28 Nov 2019 | 6:24:31 UTC
Last modified: 28 Nov 2019 | 7:06:41 UTC

Has anyone used a "GTX 1650 SUPER" or "GTX 1660 SUPER" on GPUgrid?

Would be interested to see how they compare to the non SUPER models.

Looking to fill in the blanks for performance (Low powered entry level GPUs) on the Initial Test Work Units.

Average completion time guesstimates:
GTX 1650 28200 seconds
GTX 1650 SUPER ?????
GTX 1060 3Gb 26900 seconds
GTX 1660 19500 seconds
GTX 1660 SUPER ?????
GTX 1660 Ti 16500 seconds

(completion times will vary depending on GPU, host and configuration)

Piri1974
Send message
Joined: 2 Oct 18
Posts: 38
Credit: 6,896,627
RAC: 0
Level
Ser
Scientific publications
wat
Message 53188 - Posted: 28 Nov 2019 | 17:31:22 UTC
Last modified: 28 Nov 2019 | 17:47:21 UTC

If you like, I can give you some more data about other "low end" cards for the NEW ACEMD3 application:

These are all recent work units of last week. All were completed without errors.
You may add these to your list if you like.


1) NVIDIA GeForce GTX 745 Maxwell (2048MB 900Mhz DDR3 128-bit) - driver: 430.86:
CPU = AMD FX4320 quad core
Run TIme: 166,385.50 sec = +- 46 hours
See
http://www.gpugrid.net/show_host_detail.php?hostid=512409

2) NVIDIA GeForce GT 710 Kepler (2048MB DDR5 64-bit) - driver: 432.0
CPU = Core 2 Duo E8200 Dual core
Run Time: 375,108.98 sec = +- 104 hours
See
http://www.gpugrid.net/show_host_detail.php?hostid=502519

3) NVIDIA GeForce GT 710 Kepler (2048MB 800Mhz DDR3 64-bit) driver: 436.48
CPU = AMD A8-9600 Quad Core
Run Time = 431,583.77 sec = +- 119 hours
See
http://www.gpugrid.net/show_host_detail.php?hostid=514223

4) NVIDIA GeForce GT 1030 Pascal (2048MB DDR5 64-bit) - driver: 436.48
CPU = Celeron E3300 Dual Core
Run Time = 113,797.23 sec = +- 31 hours
See
http://www.gpugrid.net/show_host_detail.php?hostid=500299

5) NVIDIA GeForce GT 730 Kepler (2048MB DDR5 64-bit) - driver: 419.35
CPU = AMD Athlon 200GE Dual Core, Quad thread
Run time = 233,535.98 sec = +- 64 hours
See
http://www.gpugrid.net/show_host_detail.php?hostid=503400



Be aware of the following points which may have influenced the performance:
- I suppose that the ACEMD3 work units are not 100% identical in the amount of work they represent, so compare the results with a big grain of salt.
- All are Windows 10 machines.
- All computers have different processors.
- At the same time, the CPUs were crunching Mapping Cancer Marker work units, as many as they have cores/threads.
- 1 of my GT710s has DDR5 memory, the other one has DDR3 memory. I have stated it above.
- All GPUs and CPUs run at default speeds. No (manual) overclocking was done.
- None of the CPUs and GPUs ever reached 70 degrees Celsius.
In fact, except for the GT1030, all even stayed well below 60°.



Hope you like it. :-)
Greetings;
Carl Philip

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 53198 - Posted: 29 Nov 2019 | 0:21:03 UTC - in response to Message 53188.

Thanks for your feedback Carl

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1332
Credit: 7,085,167,459
RAC: 14,961,402
Level
Tyr
Scientific publications
watwatwatwatwat
Message 53202 - Posted: 29 Nov 2019 | 5:28:26 UTC

Agree. Kudos Carl for defining what to expect from low end Nvidia cards. Even multi generation old cards are still usable for this project.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53203 - Posted: 29 Nov 2019 | 9:48:44 UTC - in response to Message 53188.

Remember that the deadline is 5 days (120 hours) and the present workunits are quite short, as the fastest card (the RTX 2080 Ti at the moment) can finish them in 1h 42m. The long queue is defined as 8-12 hours on the fastest card, so long workunits can be 4.72~7.08 times longer than the present test workunits. If the workunits get long enough to hit 8h on the fastest cards, none of the cards on this list will be fast enough to finish them within the deadline. The deadline won't be extended.
Don't buy low-end GPUs for GPUGrid.
If you don't want to get frustrated crunching for GPUGrid in the long term, the minimum for this project is a mid-range card (provided that the host is turned on and crunching most of the time): GTX 1660, GTX 1060 and up.
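A quick sketch of why those cards fall outside the deadline: scale the ACEMD3 test runtimes Carl reported above by the 4.72x lower bound quoted here and compare to the 120-hour deadline.

# Scale the reported test-task runtimes (hours) by the 4.72x lower bound above
# and check them against the 5-day (120 h) deadline.
test_hours = {"GTX 745": 46, "GT 710 DDR5": 104, "GT 710 DDR3": 119,
              "GT 1030": 31, "GT 730": 64}
SCALE, DEADLINE_H = 4.72, 120
for card, hours in test_hours.items():
    long_run = hours * SCALE
    verdict = "misses" if long_run > DEADLINE_H else "makes"
    print(f"{card}: ~{long_run:.0f} h -> {verdict} the deadline")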

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 53204 - Posted: 29 Nov 2019 | 12:58:03 UTC

Looking back at the original post and subsequent comments...the conversation and quest for data and efficiency from GPUs has not changed in 10 years.
Some of the GPUs in the original posts can be considered museum artifacts.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1332
Credit: 7,085,167,459
RAC: 14,961,402
Level
Tyr
Scientific publications
watwatwatwatwat
Message 53206 - Posted: 29 Nov 2019 | 15:14:36 UTC

I think the definitions of "long-run" tasks and "short-run" tasks have gone away with their applications. Now only New ACEMD3 tasks are available, and that will remain the case going forward.

So until we see normal research work using that app that takes longer than the current batch, I would say that tasks are falling in the 4-5 hour window on mid- to high-end GPUs.

We are already seeing work labelled initial_xx rather than test_Toni, and we expect normally labelled work to follow shortly.

The switch between tasks may need to be extended out to that original long definition eventually.

Profile JStateson
Avatar
Send message
Joined: 31 Oct 08
Posts: 186
Credit: 3,361,527,550
RAC: 283,434
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53207 - Posted: 29 Nov 2019 | 15:50:21 UTC

I have some stats for the mining type cards: P102-100 and P106-100


CUDA: NVIDIA GPU 0: P106-100 (driver version 440.26, CUDA version 10.2, compute capability 6.1, 4096MB, 3974MB available, 4374 GFLOPS peak)
CUDA: NVIDIA GPU 1: P102-100 (driver version 430.40, CUDA version 10.1, compute capability 6.1, 4096MB, 3964MB available, 10771 GFLOPS peak)



http://www.gpugrid.net/show_host_detail.php?hostid=509037
There are 3 GPUs, units are minutes
Dev# WU count Avg and Std of avg
GPU0 WUs:6 -Stats- Avg:279.7(0.63)
GPU1 WUs:6 -Stats- Avg:258.9(1.46)
GPU2 WUs:6 -Stats- Avg:325.2(0.38)

GPU0 is 1660Ti
GPU1 is P102-100
GPU2 is 1070


http://www.gpugrid.net/show_host_detail.php?hostid=517762

There are 3 GPUs, units are minutes
Dev# WU count Avg and Std of avg
GPU0 WUs:2 -Stats- Avg:474.3(0.46)
GPU1 WUs:2 -Stats- Avg:477.4(0.38) --this should have been the fastest card of the 3
GPU2 WUs:2 -Stats- Avg:497.3(0.59)

GPU0 & 2 are P106-100
GPU1 is eVga 1060 with 6gb

Billy Ewell 1931
Send message
Joined: 22 Oct 10
Posts: 40
Credit: 1,409,873,674
RAC: 6,405,949
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53208 - Posted: 29 Nov 2019 | 15:51:20 UTC - in response to Message 53206.
Last modified: 29 Nov 2019 | 16:26:45 UTC

I think the definitions of "long-run" tasks and "short-run" tasks have gone away with their applications. Now only New ACEMD3 tasks are available and in the future.


@TONI: would you please answer the above assumption. I have my RTX 2080 set for ACEMD3 only and my 2 GTX 1060s set for "Long" and "Short" WUs only, but my 1060s have not received a task in many days. Also, why not update the GPUGrid preferences selection options to reflect reality? I realize this is not the best forum to address the situation but maybe it will be answered anyway. Billy Ewell 1931 (as of last week I can legally boast of being 88 great years of age!)

Life is GOOD.

Piri1974
Send message
Joined: 2 Oct 18
Posts: 38
Credit: 6,896,627
RAC: 0
Level
Ser
Scientific publications
wat
Message 53209 - Posted: 29 Nov 2019 | 16:27:03 UTC - in response to Message 53203.

Remember that the deadline is 5 days (120 hours) and the present workunits are quite short, as the fastest card (the RTX 2080 Ti at the moment) can finish them in 1h 42m. The long queue is defined as 8-12 hours on the fastest card, so long workunits can be 4.72~7.08 times longer than the present test workunits. If the workunits get long enough to hit 8h on the fastest cards, none of the cards on this list will be fast enough to finish them within the deadline. The deadline won't be extended.
Don't buy low-end GPUs for GPUGrid.
If you don't want to get frustrated crunching for GPUGrid in the long term, the minimum for this project is a mid-range card (provided that the host is turned on and crunching most of the time): GTX 1660, GTX 1060 and up.



Thanks for the warning. :-)
I think I wanted to post these in another post about older GPUs, but I need to take a look if I can find it again.

I have a GTX1060 Founders Edition lying around here, but I still need to build a pc around it. :-)
I'll look for a second hand desktop somewhere with a good enough power supply for the GTX1060.
I hope that that card will be good enough for GPUGRID for at least the next 2-3 years.

Cheers;
Carl

Piri1974
Send message
Joined: 2 Oct 18
Posts: 38
Credit: 6,896,627
RAC: 0
Level
Ser
Scientific publications
wat
Message 53210 - Posted: 29 Nov 2019 | 16:29:16 UTC - in response to Message 53202.
Last modified: 29 Nov 2019 | 16:29:47 UTC

Agree. Kudos Carl for defining what to expect from low end Nvidia cards. Even multi generation old cards are still usable for this project.


You are welcome. :-)
I posted these values mostly because it's always nice for newcomers to have figures which give an idea of how long their own GPU will need to finish a work unit, especially if it is 'not so new' anymore.

Piri1974
Send message
Joined: 2 Oct 18
Posts: 38
Credit: 6,896,627
RAC: 0
Level
Ser
Scientific publications
wat
Message 53211 - Posted: 29 Nov 2019 | 16:40:59 UTC - in response to Message 53203.

I must say that a deadline of just 5 days is the shortest I have seen in any distributed project.
Especially now that they seem to have dropped the idea of shorter work units too.

A very short deadline and only big work units...

I don't know if it is a good idea,
but why not extend the deadline to 6 days or even a week?
I don't think that people will become lazy if you extend it by just a day or 2.
Of course, I guess this can influence the server and storage performance needs of the project... the servers must be able to follow.

I remember seeing other projects with much longer deadlines.
Although I do know that a longer deadline may or may not be interesting or workable,
depending on the goal of the project.

To give just 2 examples:
The deadline for SETI, isn't that 2 weeks, or a month? Not sure anymore.
And the climate prediction project even has a deadline of 1 year... That is the longest I have seen to date.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1332
Credit: 7,085,167,459
RAC: 14,961,402
Level
Tyr
Scientific publications
watwatwatwatwat
Message 53212 - Posted: 29 Nov 2019 | 20:26:13 UTC - in response to Message 53211.

Depends on the project. Storage limits or the size of the database to hold tasks and work unit results have to be considered.

Depends on the scientists need to gather the input data in a short window for publication or for combining with other scientists data.

Seti tasks mostly have a 7-week deadline; some recent ones only had a two-week deadline, and AP tasks have always had a 4-week deadline.

Some projects have deadlines of 300 days or greater.

Piri1974
Send message
Joined: 2 Oct 18
Posts: 38
Credit: 6,896,627
RAC: 0
Level
Ser
Scientific publications
wat
Message 53215 - Posted: 29 Nov 2019 | 22:23:22 UTC - in response to Message 53212.
Last modified: 29 Nov 2019 | 22:25:20 UTC

Depends on the project. Storage limits or the size of the database to hold tasks and work unit results have to be considered.

Depends on the scientists need to gather the input data in a short window for publication or for combining with other scientists data.

Seti has tasks that mostly have a 7 week deadline for normal tasks. Some recent ones only had a two week deadline and AP task have always had a 4 week deadline.

Some projects have deadlines of 300 days or greater.



Thank you Keith!

Thanks to information like this,
I slowly learn how these distributed computing projects work in difficult circumstances:
With a lack of funding, resources and manpower.

Now that I think of it:
These are exactly the circumstances in which almost all scientists and doctors worldwide need to work.
Because our governments prefer to invest in weapons rather than science and healthcare...
So let us volunteers try to compensate.

Warm greetings from a cold and wintery Brussels ;-)
Carl Philip

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 0
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53217 - Posted: 29 Nov 2019 | 23:26:36 UTC - in response to Message 53211.
Last modified: 29 Nov 2019 | 23:54:31 UTC

but why not extend the deadline to 6 days or even a week?
The reason for this short deadline has been discussed many times.
The work is generated from the previous result, so if the project extends the deadline by 2 days, the whole simulation could take up to (2 * the number of stages in the simulation) days longer.
If you look closely you can distinguish the different parts of a workunit's name:
test159-TONI_GSNTEST3-7-100-RND0962_0
The most obvious is the last digit: it's the number of re-sends.
The "7-100" means it's the 7th stage of the 100 stages.
Every stage is a workunit, the next stage picks up exactly where the previous finished.
If the deadline were 2 days longer for each stage, the whole series could take up to 200 days longer.
The present 2,999 workunits in the queue are actually 299,900 workunits.
Different batches are split into different stage sizes, depending on how urgent the actual simulation is for the project.
If the present simulation had been split into only 25 stages, the workunits would have been 4 times longer.
Less stages * short deadline = short total runtime
More stages * long deadline = long total runtime
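As a minimal sketch (Python) of that bookkeeping, assuming the field layout of the example name above:

# Parse a workunit name like test159-TONI_GSNTEST3-7-100-RND0962_0 into its
# stage fields, and show how the per-stage deadline multiplies up.
def parse_wu_name(name):
    stem, resends = name.rsplit("_", 1)           # trailing digit = number of re-sends
    parts = stem.split("-")
    return {"stage": int(parts[-3]), "total_stages": int(parts[-2]),
            "resends": int(resends)}

wu = parse_wu_name("test159-TONI_GSNTEST3-7-100-RND0962_0")
print(wu)                                 # stage 7 of 100, re-send 0
print(wu["total_stages"] * 5)             # worst case: 500 days at a 5-day deadline
print(wu["total_stages"] * (7 - 5))       # up to 200 days more if the deadline were 7 days
print(2999 * wu["total_stages"])          # 2999 queued WUs -> 299,900 stages overall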

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 401
Credit: 16,754,860,632
RAC: 17,135,862
Level
Trp
Scientific publications
watwatwat
Message 53218 - Posted: 30 Nov 2019 | 0:49:23 UTC - in response to Message 53211.

I must say that a deadline of just 5 days is the shortest I have seen in any distributed project.
At WCG Fighting AIDS at Home 2 is 24 hours and Africa Rainfall is likely to shorten to about 24 hours.
I know it's frustrating but these molecular modelling projects require an enormous amount of work so they need to tighten it up so they can get the results and wet bench them.
____________

Nick Name
Send message
Joined: 3 Sep 13
Posts: 53
Credit: 1,533,531,731
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 53221 - Posted: 30 Nov 2019 | 4:18:32 UTC

I think Rosetta has some now and then with a 3 day deadline. At least they have in the past.
____________
Team USA forum | Team USA page
Join us and #crunchforcures. We are now also folding:join team ID 236370!

Rabinovitch
Avatar
Send message
Joined: 25 Aug 08
Posts: 143
Credit: 64,937,578
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 53438 - Posted: 6 Jan 2020 | 11:58:22 UTC

Could anybody please tell me whether it is worth buying an RTX 2080Ti GPU for GPUGrid, or whether another modern but cheaper GPU would look more attractive in terms of credit per €/$?
____________
From Siberia with love!

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1332
Credit: 7,085,167,459
RAC: 14,961,402
Level
Tyr
Scientific publications
watwatwatwatwat
Message 53439 - Posted: 6 Jan 2020 | 16:48:34 UTC

Probably not. Find someone with a 2080Ti and look at the task times and then find someone with a 2080 Super and look at their task times. Then compare the percentage improvement of the 2080Ti to the percentage increase in cost.
The problem is that I don't think the scheduler individually identifies the different 2080 models; they all just get lumped together.
