
Message boards : Graphics cards (GPUs) : 560Ti Owners!

Author Message
Simba123
Send message
Joined: 5 Dec 11
Posts: 147
Credit: 69,970,684
RAC: 0
Level
Thr
Message 27977 - Posted: 9 Jan 2013 | 4:18:19 UTC

What are your current runtimes for long tasks?

At the moment we are suffering a heatwave in Australia, and I've been
forced to put my 560Ti in the third (bottom) slot on my motherboard, which
relegates it to PCIe x4. The primary 660Ti remains at x8.

I'm wondering how much, if at all, that is affecting the runtimes.
My results are here: http://www.gpugrid.net/results.php?userid=83391

Obviously the 43,000-second runtimes are the 660Ti (1124/3024) and the 63,000-second runtimes are the 560Ti (911/2005).

This is on a 2600K @ 4.5 GHz with two cores free, running six instances of Docking@home.
CPU usage averages about 90% with this setup.

I know the current long tasks really are long. Is 17.5 hours OK for the 560, though? Adding in the one-hour upload time, that's getting up around 19 hours total. Still comfortably under 24 hours, so I'm not complaining.
At least my cards aren't overheating anymore!

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,790,789
RAC: 0
Level
Glu
Message 27985 - Posted: 9 Jan 2013 | 13:53:41 UTC

17.5 h for the current batch is a good value for what is, I think in your case, the 384-core version of the 560 Ti.
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Crunching for my deceased Dog who had "good" Braincancer..

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 27995 - Posted: 9 Jan 2013 | 19:17:00 UTC - in response to Message 27977.

Your GTX560Ti times are better than I would have predicted, suggesting a relative performance improvement for the super-scalar cards.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Simba123
Send message
Joined: 5 Dec 11
Posts: 147
Credit: 69,970,684
RAC: 0
Level
Thr
Message 27999 - Posted: 10 Jan 2013 | 3:57:12 UTC

Yes, it is a Ti card. It's an MSI GTX 560 Ti 2 GB GDDR5 Twin Frozr II OC.

Glad it seems to be running on par with, or above, expectations.
It has me curious as to what effect PCIe speeds really have with GPUGrid.

Simba123
Send message
Joined: 5 Dec 11
Posts: 147
Credit: 69,970,684
RAC: 0
Level
Thr
Message 28001 - Posted: 10 Jan 2013 | 6:41:34 UTC - in response to Message 27999.

Yes, it is a Ti card. It's an MSI GTX 560 Ti 2 GB GDDR5 Twin Frozr II OC.

Glad it seems to be running on par with, or above, expectations.
It has me curious as to what effect PCIe speeds really have with GPUGrid.


<add> It is also just the 384-core card, not the 448.

Frontiers
Send message
Joined: 12 Sep 10
Posts: 6
Credit: 2,271,307
RAC: 0
Level
Ala
Message 28023 - Posted: 12 Jan 2013 | 13:20:07 UTC - in response to Message 27977.
Last modified: 12 Jan 2013 | 13:23:33 UTC

Simba123 wrote:
What are your current runtimes for long tasks?
Obviously the 43,000-second runtimes are the 660Ti (1124/3024) and the 63,000-second runtimes are the 560Ti (911/2005).


Looks like both GPUs are super-scalar.

I got 48,817.82 seconds for the 135,150-point WU named "9px26_6-NOELIA_hfXA_long_ligs89-0-2-RND0470_0"; all 12 CPU threads were loaded the whole time by the F@H SMP client.

My card is a triple-slot Asus ENGTX560Ti with 448 cores at 1730 MHz @ 1.038 V core and 2000 MHz memory; it's the GF110 chip.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 28035 - Posted: 14 Jan 2013 | 13:08:19 UTC - in response to Message 28023.
Last modified: 14 Jan 2013 | 14:28:12 UTC

Frontiers, what is your GPU clock rate?

You would need to be using fewer than 12 threads for CPU projects to optimize for GPUGrid; if you starve the GPU of CPU cycles, GPU performance drops significantly.

When you drop to PCIe x4 it's going to make a significant difference for a high-end card, but I wouldn't worry too much about dropping to x8, especially for mid-range cards or if you're using PCIe 3.0. Obviously PCIe x1 is a no-go, even for mid-range cards.
Another consideration (for some) is DDR2 vs. DDR3 system memory; DDR3 is a big improvement.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,790,789
RAC: 0
Level
Glu
Message 28041 - Posted: 14 Jan 2013 | 16:10:28 UTC
Last modified: 14 Jan 2013 | 16:16:18 UTC

I use PCIe 1.0 with a 560 Ti 448-core edition. Works fine there. :D If I weren't stuck with these big uploads and my small upload bandwidth, I'd surely get 250k RAC with these NOELIAs.
____________
DSKAG Austria Research Team: http://www.research.dskag.at



Crunching for my deceased Dog who had "good" Braincancer..

Simba123
Send message
Joined: 5 Dec 11
Posts: 147
Credit: 69,970,684
RAC: 0
Level
Thr
Message 28061 - Posted: 16 Jan 2013 | 8:06:41 UTC - in response to Message 28041.

I use PCIe 1.0 with a 560 Ti 448-core edition. Works fine there. :D If I weren't stuck with these big uploads and my small upload bandwidth, I'd surely get 250k RAC with these NOELIAs.



I think the claim that 'you can't use PCIe x1 or x4' has now been debunked.
Looking at your times and mine, there is no difference between running in the X1 slot and the X16 slot.

Quite surprising actually, but the numbers don't lie. Happy though, as it means I can leave my 560Ti in the bottom x4 slot, which means lower temps.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 28062 - Posted: 16 Jan 2013 | 11:53:02 UTC - in response to Message 28061.
Last modified: 16 Jan 2013 | 12:05:07 UTC

I think the claim that 'you can't use PCIe x1 or x4' has now been debunked.
Looking at your times and mine, there is no difference between running in the X1 slot and the X16 slot.

Quite surprising actually, but the numbers don't lie. Happy though, as it means I can leave my 560Ti in the bottom x4 slot, which means lower temps.


PCIe 1.0 doesn't mean x1 or x4; it can be x16, x8, x4 or x1. PCIe 1.0 x16 is as fast as PCIe 2.0 x8, and even PCIe 3.0 x4. So PCIe 1.0 isn't an obstacle that will prevent you crunching, it will just slow you down.
How much depends on whether it's x16, x8 or x4. Then there is the fact that PCIe 1.0 boards tend to support only DDR or DDR2 memory. This also reduces performance, as does the CPU, which would again be limited compared to a top 3rd-generation Intel CPU or your i7-2600K.
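The generation/lane-width equivalences above can be sketched numerically. This is an illustrative back-of-the-envelope check, not a benchmark; the per-lane figures (250/500/~985 MB/s of usable data rate for gen 1/2/3) are nominal, and real throughput varies with protocol overhead:

```python
# Nominal usable bandwidth per PCIe lane, in MB/s (approximate,
# illustrative figures; real-world rates vary with overhead).
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def link_bandwidth(gen, lanes):
    """One-directional bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

# The equivalences stated above:
print(link_bandwidth(1, 16))  # 4000 -- PCIe 1.0 x16
print(link_bandwidth(2, 8))   # 4000 -- same as PCIe 2.0 x8
print(link_bandwidth(3, 4))   # 3940 -- roughly matches PCIe 3.0 x4
```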

So while PCIe 2.0 x4 does work, there will be some performance loss due to the reduced PCIe rates. With a PCIe 2.0 motherboard you could be using dual- or triple-channel DDR3 and have a very fast CPU. You also have to consider operating-system performance differences (XP or Linux will outperform W7 by >11%), and don't forget that there is a 448-CUDA-core version of the 560Ti and a 384-core (with 256 usable cores) version.

This has been discussed, demonstrated and proven. How accurate the conclusions are is open to debate after every new application release, but as a general rule, when you reduce the PCIe bus rate performance will drop, and the drop is more noticeable for more powerful GPUs.

As you have the 384-CUDA-core version, I would not be too worried about having it in the PCIe 2.0 x4 slot. You might want to check whether your GTX660Ti's slot drops to x8 or remains at x16 when the x4 slot is populated; this is board-specific.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

jerome lavigne
Send message
Joined: 14 Feb 12
Posts: 7
Credit: 109,140,520
RAC: 0
Level
Cys
Message 28070 - Posted: 17 Jan 2013 | 11:07:04 UTC - in response to Message 27977.

Hi,

I have the same card, an MSI GTX 560 Ti OC Twin Frozr II, and my time to finish a long run is approximately the same.

If your GPU runs hot, lower your clock frequency with MSI Afterburner, but it will take 5 or 6 more hours to finish a long run...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 28074 - Posted: 17 Jan 2013 | 19:53:53 UTC - in response to Message 28070.

No, lower the voltage first. That's a free drop in power consumption (= electricity bill), heat and noise, as long as it's still stable. Only lower the frequency and voltage further if this isn't enough.

MrS
____________
Scanning for our furry friends since Jan 2002

jerome lavigne
Send message
Joined: 14 Feb 12
Posts: 7
Credit: 109,140,520
RAC: 0
Level
Cys
Message 28080 - Posted: 19 Jan 2013 | 12:18:14 UTC - in response to Message 28074.
Last modified: 19 Jan 2013 | 12:20:33 UTC

I don't agree with you.

In summer, my GPU went from 79°C to 63°C just by lowering the frequency (the core clock from 880 MHz to 440 MHz, and the shaders from 1760 to 880).
With a voltage lower than stock you may get instability, just as when you overclock your GPU: when you raise the frequency and get artifacts or instability, you raise the voltage.
Also, MSI Afterburner may already be lowering the core voltage, but the software doesn't report it...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 28081 - Posted: 19 Jan 2013 | 18:10:52 UTC - in response to Message 28080.

That's why I said "as long as it's still stable" - stability testing is needed when lowering the voltage, but the result is more efficient.

In your example you reduced the frequency by 50% and hence reduced power consumption by approximately 50% (less in the case of dominant leakage... but let's assume a good chip). If you had lowered the voltage as well, you could probably go from ~1.0 V to 0.9 V, or maybe 0.85 V. Let's play it safe and assume 0.9 V. This reduces power consumption to 0.9^2/1.0^2 = 81%. Now, in order to drop overall power consumption by 50%, you'd only need to lower your frequency to 0.5/0.81 = 62% of the stock frequency, i.e. 545 MHz instead of 440 MHz. Your summer throughput would be 24% higher (545/440) at similar power consumption, temperature, etc.
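The arithmetic above can be checked with a short sketch. The f·V² rule is the standard approximation for dynamic power (leakage ignored), and the 880 MHz stock clock is taken from the earlier post:

```python
def relative_power(freq_ratio, volt_ratio):
    """Dynamic power scales roughly as frequency * voltage^2 (leakage ignored)."""
    return freq_ratio * volt_ratio ** 2

# Halving only the frequency halves power:
print(relative_power(0.5, 1.0))            # 0.5
# Dropping 1.0 V -> 0.9 V alone cuts power to ~81%:
print(round(relative_power(1.0, 0.9), 2))  # 0.81
# To hit 50% total power at 0.9 V, frequency only needs to fall to:
f = 0.5 / 0.9 ** 2                         # ~0.617 of stock
print(round(880 * f))                      # ~543 MHz, vs 440 MHz without undervolting
```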

The lower you can drive the voltage, the more you gain, obviously. However, I'm not sure these chips would start at all at voltages around 0.7 V, even at very low frequencies, or whether such voltages could even be set.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2335
Credit: 16,178,080,749
RAC: 1,632
Level
Trp
Message 28085 - Posted: 19 Jan 2013 | 23:21:59 UTC - in response to Message 28062.
Last modified: 19 Jan 2013 | 23:37:01 UTC

I think the claim that 'you can't use PCIe x1 or x4' has now been debunked.
Looking at your times and mine, there is no difference between running in the X1 slot and the X16 slot.

Quite surprising actually, but the numbers don't lie. Happy though, as it means I can leave my 560Ti in the bottom x4 slot, which means lower temps.


PCIe 1.0 doesn't mean x1 or x4; it can be x16, x8, x4 or x1. PCIe 1.0 x16 is as fast as PCIe 2.0 x8, and even PCIe 3.0 x4. So PCIe 1.0 isn't an obstacle that will prevent you crunching, it will just slow you down.
How much depends on whether it's x16, x8 or x4. Then there is the fact that PCIe 1.0 boards tend to support only DDR or DDR2 memory. This also reduces performance, as does the CPU, which would again be limited compared to a top 3rd-generation Intel CPU or your i7-2600K.

So while PCIe 2.0 x4 does work, there will be some performance loss due to the reduced PCIe rates. With a PCIe 2.0 motherboard you could be using dual- or triple-channel DDR3 and have a very fast CPU. You also have to consider operating-system performance differences (XP or Linux will outperform W7 by >11%), and don't forget that there is a 448-CUDA-core version of the 560Ti and a 384-core (with 256 usable cores) version.

This has been discussed, demonstrated and proven. How accurate the conclusions are is open to debate after every new application release, but as a general rule, when you reduce the PCIe bus rate performance will drop, and the drop is more noticeable for more powerful GPUs.

I remember that discussion. It motivated me to upgrade from a Core 2 Quad to a Core i7-870 and a 970. While every statement of yours is still true, my experience suggests that the CUDA 4.2 client is throttled far less by factors such as PCIe bandwidth than the CUDA 3.1 client was (at least for some types of workunits). One of my Core i7-970s and its motherboard failed recently, and I could replace it only with a Core 2 Duo 6700 (@ stock clock) in an Intel D975XBX motherboard. It has three PCIe (1st gen.) x16 slots; two of them share the same x16 lanes (so they become x8 when both are populated), and the 3rd slot is PCIe x4 only. I have two GTX 480s @ 800 MHz in slot 1 (@ x16) and slot 3 (@ x4). Looking at the running times, it's indistinguishable which workunit was processed on which card. Even if you compare them with the running times of my other host (similar GPU, but a Core i7-870 @ 3.85 GHz), there is only about a 5-7% improvement.
I have another experimental host with a GTX480 @ 800 MHz. Originally it had a Pentium D 2.8 GHz CPU in an Intel DQ965GF motherboard (PCIe x16, 1st gen.), but I recently upgraded its CPU to a Core 2 Duo 6600. The results from before this upgrade are still on the list on this host's page. I haven't experienced any decrease in the running times after upgrading the CPU (while I can recall there was such a decrease in the times of the CUDA 3.1 client). I know the GTX480 is not the fastest GPU any more, so to make things clearer, I've put a GTX670 into this host. Its results will come in the following days.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 28088 - Posted: 20 Jan 2013 | 11:00:06 UTC - in response to Message 28085.

I would expect CPU speed and PCIe bandwidth (and latency) to matter less the more complex the WUs are and the slower the GPU is. And I wouldn't be surprised if the current long runs were simulating rather complex molecules... whatever wasn't possible previously due to a lack of computing power.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2335
Credit: 16,178,080,749
RAC: 1,632
Level
Trp
Message 28089 - Posted: 20 Jan 2013 | 15:17:31 UTC - in response to Message 28085.

I have another experimental host with a GTX480 @ 800 MHz. Originally it had a Pentium D 2.8 GHz CPU in an Intel DQ965GF motherboard (PCIe x16, 1st gen.), but I recently upgraded its CPU to a Core 2 Duo 6600. The results from before this upgrade are still on the list on this host's page. I haven't experienced any decrease in the running times after upgrading the CPU (while I can recall there was such a decrease in the times of the CUDA 3.1 client). I know the GTX480 is not the fastest GPU any more, so to make things clearer, I've put a GTX670 into this host. Its results will come in the following days.

It finished the last NOELIA_hfXA_long_ligs89 in 38,746 seconds (10h 45m 46s). The GTX480 @ 800 MHz processed it to 5.2%, and the GTX670 processed the rest (94.8%). There is no significant difference in the running times compared to my other host with a Core i7-870 and two GTX670s (that MB has two real x16 PCIe 2.0 slots). I should mention that this host consumes 105 W less with the GTX670 @ 1084 MHz than with the GTX480 @ 800 MHz.
The next workunit on my experimental host was a TONI_AGGd8, completed in 20,637 seconds (5h 44m); it took just as long as it will take on my other host (which is at 20% at the moment).
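The seconds-to-wall-clock conversions quoted in this post can be checked with a small helper:

```python
def hms(seconds):
    """Split a runtime in whole seconds into (hours, minutes, seconds)."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return h, m, s

print(hms(38746))  # (10, 45, 46) -- the NOELIA runtime above
print(hms(20637))  # (5, 43, 57) -- i.e. the quoted ~5h 44m for the TONI WU
```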
