
Message boards : Graphics cards (GPUs) : GTX580 specifications

Author Message
Profile GDF
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1861
Credit: 629,356
RAC: 0
Message 19145 - Posted: 31 Oct 2010 | 18:44:01 UTC

Not confirmed.
http://vr-zone.com/articles/nvidia-geforce-gtx-580-specifications-leaked/10184.html

gdf

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19159 - Posted: 1 Nov 2010 | 0:55:13 UTC - in response to Message 19145.

This may just be a paper launch to deflect attention from ATI's 6000 series cards, due out later in November. Even the change to the 500 series suggests they want people to think they have released a new series of cards. I think it will be a while (January) before we see how these perform on the new ACEMD 3.2 app.

If the reportedly leaked specs (512 cores, 1544MHz shaders and a 4Gbps data rate) are correct, it should be about 18% faster than a reference GTX480. It is also reported to be cooler and quieter than the original GTX480, and I will buy that story, as I noticed my second GTX470 was slightly cooler and quieter due to using lower voltages. That may have stemmed from improved crystallization, silicon refinement, engraving methods, energy mapping... They have had 9 months to improve manufacturing methods.
The suggestion seems to be that some factory OC versions may eventually appear, pushing those GTX580s to about 25% faster than the present GTX480.
However, with limited or no availability and high prices, these are well out of most people's reach.
Fortunately there are other Fermis to choose from, and their prices are falling. A GTS455 should also appear on the Fermi list soon.
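That ~18% estimate can be sanity-checked from the leaked numbers alone. A minimal sketch, assuming the reference GTX480's 480 shaders at 1401MHz and that crunching throughput scales with shaders × shader clock (it ignores memory bandwidth and any architectural tweaks):

```python
# Rough relative crunching throughput: shader count x shader clock (MHz).
gtx480 = 480 * 1401   # reference GTX480 (assumed reference figures)
gtx580 = 512 * 1544   # leaked GTX580 specs from the post above
ratio = gtx580 / gtx480
print(f"GTX580 vs GTX480: {ratio:.3f}x")   # ~1.176x, i.e. about 18% faster
```

That it lands so close to the reported 18% suggests the leak was extrapolated the same way.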

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 19167 - Posted: 1 Nov 2010 | 9:07:39 UTC

So far it looks like the same architecture (which is fine) with a general overhaul of manufacturing, improving speed and/or leakage (that's how they manage slightly higher clocks at slightly reduced power consumption) and reliability (otherwise they still couldn't provide a 512 SP part). TSMC's 40 nm process should be almost mature by now, and nVidia should have had enough experience to handle it.
In my opinion these rumors look quite plausible.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19283 - Posted: 6 Nov 2010 | 0:08:54 UTC - in response to Message 19167.

GTX 580 Specs:
Full complement of GPU cores @ 700MHz
512 Cuda Cores (Shaders) @ 1400MHz
1.5GB of GDDR5 @ 4GHz
128 TMUs (needed)
GF110
244W
290mm (11.5in) long...
An 8-pin and a 6-pin power connector
Release date 8th or 9th Nov; not that you will be able to get your hands on one, even if you do have a spare £500+.

http://vr-zone.com/articles/report-nvidia-geforce-gtx-580-tdp-is-244w-includes-128-tmu-benchmarks-leaked/10202.html

Also noticed this,
ENGTX470/2DI/1280MD5/V2.

I expect they will also release a GF110-based GTX470. That might explain why you can pick up a GTX470 for £190, and it makes sense; not every chip is going to make the GTX580 cut.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 19297 - Posted: 6 Nov 2010 | 13:41:09 UTC - in response to Message 19283.
Last modified: 6 Nov 2010 | 13:41:22 UTC

That looks like everything the GTX480 should have been :)

And if nVidia wanted at least to try a little bit to put some consistency into their naming scheme, they'd call a GF110-based "GTX470" a GTX570. The savings in power consumption alone should justify it, let alone the improved gaming performance due to more TMUs. But then consistent naming is not exactly nVidia's strength.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19354 - Posted: 8 Nov 2010 | 23:22:56 UTC - in response to Message 19297.

Techpowerup are giving these preliminary specs:
512 shaders, 48 ROPs, GF110, 1536MB, 384-bit,
Transistors – 3000M (is this down from 3200M?),
772MHz core (that's a bit more like it),
pushing the shaders up to a modest 1544MHz,
1002MHz memory clock (not bad).

I hope Techpowerup are closer to the mark.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 19358 - Posted: 9 Nov 2010 | 9:11:16 UTC - in response to Message 19354.

They also said "about 3 billion" transistors for GF100 prior to the launch, so I wouldn't read more than "between 3.2 and 3.49 billion" into this. Apart from that and the clock speeds there's not much difference between these sources, or did I miss anything?

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19359 - Posted: 9 Nov 2010 | 10:38:17 UTC - in response to Message 19358.

The "gone" pre-review did say 3000M transistors, and that they trimmed some leakage.
It uses vapor-chamber heatplate technology.
It uses GDDR5 that can run at up to 5000MHz.
The card's power draw is limited to 300W, so Furmark and OCCT cannot take it past this, which might be an issue for extreme overclockers - hopefully not for crunching on GPUGrid.

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19398 - Posted: 10 Nov 2010 | 0:17:02 UTC - in response to Message 19359.
Last modified: 10 Nov 2010 | 16:52:40 UTC



The GTX580 is 15.4% faster for crunching CUDA tasks than the GTX480, as is.

£380 to £430

Review by Ryan Smith,
http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/16

There are overclocked versions available too,

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 19434 - Posted: 11 Nov 2010 | 21:42:17 UTC

The memory chips are probably at least as high-quality as the previously used ones. But the memory controller is the same, and that's what's limiting the OC here.

And the throttling seems to be based purely on software and to be non-transparent, i.e. the card still reports normal clocks and voltages. A software implementation means that it's not going to work for unknown workloads (=BOINC), which is probably good for us.
Judging by Anand's power consumption measurements, the throttling actually forces it below the power draw of heavy games (Crysis). Is it throttled down to its TDP of 250W while the game causes a higher load? I don't know yet, and I really dislike that nVidia is not doing this transparently or implementing hardware protection for the VRMs like ATI did. That would actually benefit us, whereas the current solution only benefits nVidia's marketing.

Regarding transistor count: they improved z-culling and enhanced the TMU capabilities, and otherwise didn't change any logic that I know of. So transistor count should normally have gone up a little, not down. And I'm not going to give them the benefit of the doubt here: rest assured that if the transistor count were actually 3.0 billion, they'd say "3.0 billion" rather than "3 billion" ;)

MrS
____________
Scanning for our furry friends since Jan 2002

STE\/E [BlackOps]
Send message
Joined: 18 Sep 08
Posts: 348
Credit: 250,952,560
RAC: 0
Message 19571 - Posted: 20 Nov 2010 | 14:11:39 UTC
Last modified: 20 Nov 2010 | 14:48:07 UTC

I just started running a couple of the new GTX 580's here, they haven't finished any Wu's yet though. The Wu's that are finished already on that Host are from a couple GTX 460's that were in the Box.

They're EVGA GTX 580 Black Ops Editions with a Stock Clocking of 1.088 Volts, 797 Core, 1594 Shader & 2025 Memory. I have them running now @ 1.088 Volts, 875 Core, 1750 Shader & 2150 Memory.

I could go higher with a Voltage Tweak but they run Hot enough already @ 80c-88c with about a 70%-75% Fan Speed on AUTO. I don't want to up the Voltage or the Fan Speeds so if they finish the Wu's at those clock speeds that's fine with me.

They're in an i7 920 Box that's clocked to 3.80GHz on Air, Enermax 850 85+ PSU... :)

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 19572 - Posted: 20 Nov 2010 | 17:41:38 UTC - in response to Message 19571.

It's great to have someone crunching with the new top GPU here so soon!
A piece of advice: you should dedicate one CPU core to each GPU (using the SWAN_SYNC=0 environment variable) to achieve maximum performance from the GTX580s. Right now your GTX580s are performing a little less well than my overclocked GTX480s. Note that their temperatures will rise if you apply the settings I've suggested, and you may have to raise the fan speed to keep the GPUs stable.

STE\/E [BlackOps]
Send message
Joined: 18 Sep 08
Posts: 348
Credit: 250,952,560
RAC: 0
Message 19575 - Posted: 20 Nov 2010 | 20:53:18 UTC

I tried them but didn't like what I was seeing, my CPU Temps jumped by 10c-13c so that's a no no. I turned HT back off but am still running the SWAN_SYNC=0 to see what happens. Looks like each GPU is using 12%-25% of a CPU Core when it needs to that way ...

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19576 - Posted: 20 Nov 2010 | 21:21:31 UTC - in response to Message 19575.

Run the GPU's at reference frequencies (stock), and keep the fans high.

STE\/E [BlackOps]
Send message
Joined: 18 Sep 08
Posts: 348
Credit: 250,952,560
RAC: 0
Message 19577 - Posted: 20 Nov 2010 | 21:49:37 UTC - in response to Message 19576.

Run the GPU's at reference frequencies (stock), and keep the fans high.


The Temps on the GPU didn't go up any, it was the CPU that took a Jump. The i7 920's don't like anything over 4 Cores I've found, at least not at 3.80GHZ anyway on Air.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 746
Credit: 818,853,967
RAC: 2,298,596
Message 19581 - Posted: 21 Nov 2010 | 12:08:57 UTC - in response to Message 19575.

I tried them but didn't like what I was seeing, my CPU Temps jumped by 10c-13c so that's a no no. I turned HT back off but am still running the SWAN_SYNC=0 to see what happens. Looks like each GPU is using 12%-25% of a CPU Core when it needs to that way ...

This is interesting, a test of SWAN_SYNC on the very fastest NVidia GPU. Looking at your results there are 2 WU types that completed with SWAN_SYNC both on and off. On one type the speedup was 8% and on the other 21%. It would be interesting to see results with SWAN_SYNC off and the priority boosted to High or even just to Above Normal. The GPUGRID process can be boosted automatically with a program such as eFMer Priority 64:

http://www.efmer.eu/boinc/download.html

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 19582 - Posted: 21 Nov 2010 | 12:26:26 UTC - in response to Message 19577.

The i7 920's don't like anything over 4 Cores I've found


Well.. I'd say that's because HT actually increases hardware utilization. There's no free lunch here, though: throughput at the same clock speed increases, but you have to pay for it in terms of energy.

Running GPU-Grid it's a little different, as the CPU is not directly doing any crunching, it's just asking the GPU "are you done yet?" all the time.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1861
Credit: 629,356
RAC: 0
Message 19583 - Posted: 21 Nov 2010 | 13:56:13 UTC - in response to Message 19577.

This is the fastest result ever seen on GPUGRID: 3.9 ms/step on the DHFR workunit:
http://www.gpugrid.net/result.php?resultid=3326415

However, it should be able to get to 3.3 ms/step (just guessing, as we don't have any GTX580s yet). Maybe by boosting the priority as suggested.

gdf


STE\/E [BlackOps]
Send message
Joined: 18 Sep 08
Posts: 348
Credit: 250,952,560
RAC: 0
Message 19596 - Posted: 22 Nov 2010 | 0:54:48 UTC

Okay, I'll try some of the ideas out when I bring the 580 back over here to GPU Grid, right now I'm running the PrimeGrid Sieve Wu's with it ...

Jeroen
Send message
Joined: 26 Nov 10
Posts: 9
Credit: 12,382,901
RAC: 3
Message 19689 - Posted: 27 Nov 2010 | 5:21:34 UTC - in response to Message 19596.

Hello -

I am new here with GPUGRID.net. I just finished my first WU via the 580. Run time 9176 sec. I am not sure how good that is or not. The card is an EVGA SC. I have it clocked at 797/1594 currently but the card runs stable at 900 MHz stock voltage with a bit warmer temp.

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 19690 - Posted: 27 Nov 2010 | 9:46:01 UTC - in response to Message 19689.
Last modified: 27 Nov 2010 | 9:47:30 UTC

I just finished my first WU via the 580. Run time 9176 sec. I am not sure how good that is or not.

This is good. If you want maximum performance, you should dedicate one CPU core to feed the GPU. Use the SWAN_SYNC=0 environment variable to achieve this. (Control Panel -> System -> Advanced tab -> Environment Variables -> New (System variables) -> variable name: SWAN_SYNC, value: 0 -> restart Windows.)
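For reference, the same variable can be set from a shell on Linux; this is a minimal sketch (the Windows route is the Control Panel path described above, or the `setx` command):

```shell
# Make ACEMD busy-wait on a dedicated CPU core by setting SWAN_SYNC.
# Windows equivalent (persists across sessions): setx SWAN_SYNC 0
export SWAN_SYNC=0
echo "SWAN_SYNC=$SWAN_SYNC"   # prints SWAN_SYNC=0; restart the BOINC client to pick it up
```

Note that `export` only affects the current shell and its children, so BOINC must be started from that shell (or the variable set system-wide) for the app to see it.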

Jeroen
Send message
Joined: 26 Nov 10
Posts: 9
Credit: 12,382,901
RAC: 3
Message 19702 - Posted: 27 Nov 2010 | 22:08:27 UTC - in response to Message 19690.

I just finished my first WU via the 580. Run time 9176 sec. I am not sure how good that is or not.

This is good. If you want maximum performance, you should dedicate one CPU core to feed the GPU. Use the SWAN_SYNC=0 environment variable to achieve this. (Control Panel -> System -> Advanced tab -> Environment Variables -> New (System variables) -> variable name: SWAN_SYNC, value: 0 -> restart Windows.)


Thanks for the tip. I just added this environment variable.

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 19899 - Posted: 14 Dec 2010 | 12:16:51 UTC - in response to Message 19583.

This is the fastest result ever seen on GPUGRID: 3.9 ms/step on the DHFR workunit:
http://www.gpugrid.net/result.php?resultid=3326415

However, it should be able to get to 3.3 ms/step (just guessing, as we don't have any GTX580s yet). Maybe by boosting the priority as suggested.

gdf

How about 3.747 ms / step?

I'll paste it here before it's gone:

<core_client_version>6.10.58</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There are 2 devices supporting CUDA
# Device 0: "GeForce GTX 580"
# Clock rate: 1.61 GHz

# Total amount of global memory: 1610153984 bytes
# Number of multiprocessors: 16
# Number of cores: 128
# Device 1: "GeForce GTX 480"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1610285056 bytes
# Number of multiprocessors: 15
# Number of cores: 120
SWAN: Using synchronization method 0
MDIO ERROR: cannot open file "restart.coor"
# Time per step (avg over 2000000 steps): 3.747 ms
# Approximate elapsed time for entire WU: 7494.516 s
called boinc_finish

</stderr_txt>
]]>


I think it's very hard to reach 3.3 ms/step, because my GTX 580 runs at 95-96% GPU usage already. Maybe we can gain another 0.1-0.2 ms/step with heavy overclocking of the GPU (some bigger GPU cooler is required to do this, and maybe one of the new Core i3, i5, i7 processors which will be released in 2011/Q1).

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19906 - Posted: 14 Dec 2010 | 16:32:44 UTC - in response to Message 19899.

Your 3.747 is the fastest result I have seen, and I did check a few systems.

You are probably correct in that if you had a faster system you might see slightly faster results and higher GPU utilization. To get right down to 3.3 (12% faster) you might also need an X64 system and to do no CPU work at all.
If you use GPU-Z or one of the overclocking tools, you could check whether freeing up additional CPU cores makes any difference; if the GPU utilization rises, you might want to run one task like that just to see what the optimum performance is on your system.

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 19945 - Posted: 16 Dec 2010 | 8:00:08 UTC - in response to Message 19906.
Last modified: 16 Dec 2010 | 8:01:00 UTC

3.614 ms/step

It's happened overnight, so I couldn't watch the GPU usage.

<core_client_version>6.10.58</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There are 2 devices supporting CUDA
# Device 0: "GeForce GTX 580"
# Clock rate: 1.70 GHz

# Total amount of global memory: 1610153984 bytes
# Number of multiprocessors: 16
# Number of cores: 128
# Device 1: "GeForce GTX 480"
# Clock rate: 1.40 GHz
# Total amount of global memory: 1610285056 bytes
# Number of multiprocessors: 15
# Number of cores: 120
SWAN: Using synchronization method 0
MDIO ERROR: cannot open file "restart.coor"
# Time per step (avg over 2000000 steps): 3.614 ms
# Approximate elapsed time for entire WU: 7227.063 s
called boinc_finish

</stderr_txt>
]]>

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19949 - Posted: 16 Dec 2010 | 10:57:21 UTC - in response to Message 19945.

Excellent, you just shaved off another 0.133 ms/step, raised the bar a notch, and showed that the GTX580 can run the most demanding tasks (GIANNI_DHFR1000) while overclocked by 10.1%. A very good card.

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 19953 - Posted: 16 Dec 2010 | 11:36:16 UTC - in response to Message 19945.
Last modified: 16 Dec 2010 | 12:05:56 UTC

I've made a little calculation:
I raised the GPU's clock by 6.25% (1.7GHz/1.6GHz = 1.0625).
The average time/step went down by 3.55% (3.614ms/3.747ms = 0.9645); in other words, the processing is 3.68% faster (3.747ms/3.614ms = 1.0368).
So these numbers show that other components really are limiting the processing speed (as I supposed before). If they weren't limiting it, the average time/step should now be 3.527ms (=1.6GHz/1.7GHz*3.747ms).
Taking this scaling loss into account, to achieve the 12% performance boost needed to reach 3.3ms/step, the GPU would have to be overclocked by 20.4% (6.25/3.68*12=20.38), i.e. 1.9264GHz shader and 963.2MHz core frequency, or more (because the other limiting factors could be greater at higher GPU clocks). I won't try this while I don't have a huge GPU cooler on my GTX 580. (In January I will.)
But I have a faster CPU in my other PC, so I could swap the GPUs, or the CPUs, to find out how much a faster CPU helps. But shutting down both of my PCs at the same time for an hour (or longer) is a really hard thing to do.
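The ratios above can be replayed in a few lines; this is just a sketch of the same arithmetic, using only the figures quoted in the post:

```python
clock_ratio = 1.7 / 1.6           # +6.25% shader clock (1.6 -> 1.7 GHz)
step_ratio  = 3.747 / 3.614       # observed speedup: ~3.68%
ideal_step  = 1.6 / 1.7 * 3.747   # ms/step if the GPU clock were the only limit
oc_needed   = 6.25 / 3.68 * 12    # % overclock needed for a 12% gain at this scaling
print(f"clock: +{(clock_ratio - 1) * 100:.2f}%, observed: +{(step_ratio - 1) * 100:.2f}%")
print(f"ideal step time: {ideal_step:.3f} ms, OC needed for 3.3 ms/step: {oc_needed:.1f}%")
```

The gap between the 6.25% clock increase and the 3.68% observed speedup is the measure of how much the CPU, memory, or PCIe side is holding the card back.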

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 19954 - Posted: 16 Dec 2010 | 12:27:11 UTC - in response to Message 19953.
Last modified: 16 Dec 2010 | 12:53:29 UTC

Your 4% loss from a 1h shutdown would be offset by a day or two's increased performance ;)

Presently my GTX470's are only OC'd by 8% and run the GIANNI_DHFR1000 tasks without failure (5.325ms per step), so your GTX580 is 47% faster. At a 19% GPU OC these tasks failed on my GTX470, but the other tasks ran well. So if you can successfully run GIANNI_DHFR1000 tasks, the rest should also run. They make for good stability-test work units.

PS. From the reference shader frequency (1544MHz), your card is overclocked by 10.1%.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 19982 - Posted: 17 Dec 2010 | 22:15:23 UTC

Could well be that your GPU utilisation dropped a little upon further OC'ing. Or that GPU memory is holding you back. Generally I would have expected a much higher improvement going from 1.6 to 1.7 GHz.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 19990 - Posted: 18 Dec 2010 | 13:52:43 UTC
Last modified: 18 Dec 2010 | 13:53:52 UTC

I've removed my PC from its case in order to make a new ventilation inlet at the bottom (the lower GPU cooler could not breathe in enough air, because it's right next to the bottom of the case). The temps went down, of course, so I've raised the GTX580's clock to 900MHz. It failed a couple of tasks, so I've raised the GPU's voltage by 13mV (to 1.063V). It seems to be OK since then. It completed a _KASHIF_HIVPR_n1_unbound_ task in 2 hours (7.203 ms/step). The max temp was 65°C at 95% GPU usage (standard GTX580 cooler @ 78%, plus a 12cm fan placed right at the top of the two cards). I hope I'll receive a GIANNI_DHFR task soon.

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Message 20002 - Posted: 22 Dec 2010 | 19:36:47 UTC

I've set a new world record in crunching GIANNI_DHFRs: 3.441ms/step

<core_client_version>6.10.58</core_client_version>
<![CDATA[
<stderr_txt>
# Using device 0
# There are 2 devices supporting CUDA
# Device 0: "GeForce GTX 580"
# Clock rate: 1.80 GHz

# Total amount of global memory: 1610153984 bytes
# Number of multiprocessors: 16
# Number of cores: 128
# Device 1: "GeForce GTX 480"
# Clock rate: 1.60 GHz
# Total amount of global memory: 1610285056 bytes
# Number of multiprocessors: 15
# Number of cores: 120
SWAN: Using synchronization method 0
MDIO ERROR: cannot open file "restart.coor"
# Time per step (avg over 2000000 steps): 3.441 ms
# Approximate elapsed time for entire WU: 6882.250 s
called boinc_finish

</stderr_txt>
]]>

I think my CPU is also part of this success. I'm using a C2Q 9650 @ 4.25GHz (it runs at this speed at standard CPU voltage).
BOINC reports the measured floating point speed as 4438.87 million ops/sec.
Some WUs are failing on this GTX 580 @ 900MHz, so I think I will lower the GPU clock in the future.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2450
Credit: 152,691,441
RAC: 167,050
Message 20008 - Posted: 22 Dec 2010 | 23:56:31 UTC - in response to Message 20002.

"How fast do you want to get the wrong result?" is not the question we're trying to answer ;)
Anyway, very nice scores!

MrS
____________
Scanning for our furry friends since Jan 2002

STE\/E [BlackOps]
Send message
Joined: 18 Sep 08
Posts: 348
Credit: 250,952,560
RAC: 0
Message 20010 - Posted: 23 Dec 2010 | 8:43:22 UTC

Records are made to be Broken at any Cost ... lol

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 20011 - Posted: 23 Dec 2010 | 11:14:51 UTC - in response to Message 20010.

Perhaps the 3.441ms/step record will fall when someone uses a Sandy Bridge CPU on Linux, with some light OC'ing.

kts
Send message
Joined: 4 Nov 10
Posts: 21
Credit: 25,973,574
RAC: 0
Message 20033 - Posted: 25 Dec 2010 | 17:18:21 UTC

EVGA has a single-slot water-cooled 580 out; street price in Akihabara, Tokyo is 70,000 yen, or over $700 - a ¥20,000/$200 premium.

FYI, courtesy of Google translation:

A water-cooled GeForce GTX 580 makes its debut: EVGA's "GeForce GTX 580 FTW Hydro Copper 2 (015-P3-1589-KR)" has been released.

In addition to being overclocked as standard, its card thickness of just one slot is a notable feature. Street price is around 70,000 yen (see "new products found this week" for details).

This product is a GeForce GTX 580 video card with liquid cooling as standard. The board is covered by a chrome-plated copper cooling head, and fittings are provided on the side opposite the PCI Express x16 connector.

Best of all, it has been slimmed down to one slot from the normal models' two-slot card thickness. Despite introducing a water-cooling system, it is a likely choice for those who want to save expansion-slot space.

The main specifications: 1,536MB of GDDR5 memory, an 850MHz core clock, 1,700MHz shaders and a 4,196MHz memory data rate (versus 772MHz core, 1,544MHz shader and 4,008MHz memory on the reference model), Mini HDMI and dual DVI video outputs, and a card size of 111.15 × 266.7mm. Like the normal models, it has one 6-pin and one 8-pin external power connector.


□ GeForce GTX 580 FTW Hydro Copper 2 (EVGA)

http://www.evga.com/products/moreinfo.asp?pn=015-P3-1589-KR

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 20035 - Posted: 25 Dec 2010 | 17:47:32 UTC - in response to Message 20033.

It looks really good, and should give good performance, but the connectors are vertically mounted, meaning that the tubing will protrude into the next slot. Pity they did not top/rear mount the connectors.

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Message 20108 - Posted: 4 Jan 2011 | 21:51:06 UTC

I just installed a new GTX 580 and ran a task to see how it performs.

Any idea why this task took so long? I have a second in progress now, and it looks like it will take the same amount of time. GPU load is ~98%. SWAN_SYNC=0. I have preferences set so that BOINC is using only 7 of 8 CPU cores. What am I doing wrong?

One thing that sticks out is the difference between the value for "Run time", and the value for "Approximate elapsed time for entire WU". What could cause those two values to be so different?

http://www.gpugrid.net/result.php?resultid=3522674

Run time 62752.374998 (17 hours)
CPU time 62631.21
SWAN: Using synchronization method 0
# Time per step (avg over 575000 steps): 21.622 ms
# Approximate elapsed time for entire WU: 27027.548 s (7.5 hours)
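For what it's worth, the app's "Approximate elapsed time" line appears to be just the averaged time-per-step multiplied by the WU's nominal step count; the 1.25 million steps below is inferred from the two printed figures, not something the log states. A sketch of that arithmetic:

```python
ms_per_step = 21.622         # averaged over the steps actually timed
total_steps = 1_250_000      # inferred: 27027.548 s / 0.021622 s per step
estimate_s  = ms_per_step * total_steps / 1000
print(f"estimated WU time: {estimate_s:.1f} s")       # ~27027.5 s, matching the log
run_time_s  = 62752.374998   # actual wall-clock run time from the result page
print(f"wall clock was {run_time_s / estimate_s:.1f}x the estimate")
```

So the estimate says nothing about wall-clock stalls: the step average and the real run time can disagree by a large factor if the card spent time idle or throttled between timed steps.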

____________
Dublin, California
Team: SETI.USA

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Message 20111 - Posted: 4 Jan 2011 | 23:36:14 UTC - in response to Message 20108.

Hi zombie67, that is FAR too slow for a GTX580. Something is very wrong!

Check your available RAM and HDD usage, just in case.

Right click on the desktop, click NVidia control panel, Manage 3D settings, Global Settings, Power management mode, Prefer Maximum Performance. Restart the system.

If you are running some very CPU intensive apps consider freeing another CPU thread; some CPU tasks want more CPU time than they can get, so in Boinc check the difference between CPU time and elapsed time, in case you are running some very hungry CPU tasks.

Did you install the driver after the card and then do a restart?

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Message 20112 - Posted: 4 Jan 2011 | 23:52:54 UTC - in response to Message 20111.
Last modified: 5 Jan 2011 | 0:17:12 UTC

Hi zombie67, that is FAR too slow for a GTX580. Something is very wrong!

Check your available RAM and HDD usage, just in case.


Using only 1GB (of 8GB) of RAM. 70GB of 146GB free disk space.

Right click on the desktop, click NVidia control panel, Manage 3D settings, Global Settings, Power management mode, Prefer Maximum Performance. Restart the system.


I tried that, but it is not possible. On the Global Settings tab, power management is not listed as one of the options. On the Program Settings tab, there are two problems: 1) you have to select the program, and BOINC/CUDA/GPUGRID/ACEMD2 are not listed; 2) even if it were listed, the only choice under Power Management is "Adaptive". In any case, according to GPU-Z, the GPU is not being throttled. It is running at full speed, and full load.

If you are running some very CPU intensive apps consider freeing another CPU thread; some CPU tasks want more CPU time than they can get, so in Boinc check the difference between CPU time and elapsed time, in case you are running some very hungry CPU tasks.


I actually ran most of that task using only 6 of 8 CPU cores. There were plenty of idle CPU cycles.

Did you install the driver after the card and then do a restart?


Yes. Before installing the card, I uninstalled the old card driver, then ran driver sweeper, then installed the latest driver from nVidia's site (263.09). After each step, I rebooted.

FWIW, on other CUDA projects, the card runs at equivalent speeds to other machines with the GTX 580. It is only this project where I am having this problem.

Edit: I think the differing values for CPU time are a clue. I just don't know enough about the app to understand what it means.
____________
Dublin, California
Team: SETI.USA

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20115 - Posted: 5 Jan 2011 | 7:26:30 UTC - in response to Message 20112.

I still think the card is throttling back.

The CPU time / run time is consistent with the use of swan_sync=0 - this is fine.

Although your second task was faster, at 16ms per step, this is still well short of what it should be; a similar task on one of my GTX470's takes about 14ms per step at reference speeds, and I have seen 12ms per step for a GTX580 on Win7 (which is the slower OS for this).

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20116 - Posted: 5 Jan 2011 | 7:28:19 UTC - in response to Message 20115.

I still think the card is throttling back.

The CPU time / run time is consistent with the use of swan_sync=0 - this is fine.

Although your second task was faster, at 16ms per step, this is still well short of what it should be; a similar task on one of my GTX470's takes about 14ms per step at reference speeds, and I have seen 12ms per step for a GTX580 on Win7 (which is the slower OS for this).


Okay. Solution? And why only on this project?
____________
Dublin, California
Team: SETI.USA

kts
Send message
Joined: 4 Nov 10
Posts: 21
Credit: 25,973,574
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwat
Message 20117 - Posted: 5 Jan 2011 | 10:47:11 UTC - in response to Message 20116.

What monitoring/tuning software tools are you using? I have a generic GTX570 and use

MSI Afterburner for overclocking and fan speed control

TechPowerUp GPU-Z for all data: clocks, V,W,A loads, temp, bios/driver version, etc.

nVidia control panel - to set max. performance, multi-monitors, etc.

Using GPU-Z all looks well for my GTX570: everything ID'd properly, GPU chip GF110, Rev. A1, release date Dec 07, 2010, BIOS ver. 70.10.17.00.03, 480 unified shaders, 152 GB/s bandwidth, driver version 8.17.12.6309 (ForceWare 263.09), Win7 64, etc. How about you - any evidence of problems? Primitive BIOS, broken memory/shaders/ROPs recognized, half-speed core/memory/shader clocks? PCIE 2.0 x16 running @ x16 2.0? Proper voltages? (Maybe the card's external 6+2-pin 12V power plugs only LOOK like they are making proper contact, and the rated current from the PSU just isn't getting there for full operation - total guesswork here.) FWIW, my 570 is pulling about 33A and 31W @ 57C temp with the fan near 3,000rpm at 70%.

Looking at your last two tasks, it seems the time dropped by more than half... 62,000 to 27,000... is it fixed - what was the issue/solution? But those times look more like my GTX460's than the GTX570's 13-16,000.

Keep on this; your card should do way better than 27,000 per task - that is about half the speed it should run at - and I am curious about the issue. I know in my case it would be something careless like forgetting to plug in the necessary PCIE power cables.
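kts's sanity check can be roughed out numerically. A crude sketch, assuming step time scales inversely with shaders x shader clock (it ignores memory bandwidth and driver overhead, so treat the output as a ballpark only); the 14 ms GTX470 reference comes from skgiven's post above:

```python
def expected_ms_per_step(ref_ms, ref_shaders, ref_clock_mhz, shaders, clock_mhz):
    # Inverse scaling with raw shader throughput -- a deliberately crude model.
    return ref_ms * (ref_shaders * ref_clock_mhz) / (shaders * clock_mhz)

def looks_throttled(actual_ms, expected_ms, slack=1.5):
    # Flag a card running far slower than even the crude model predicts.
    return actual_ms > slack * expected_ms

# Reference: GTX470, 448 shaders @ 1215 MHz, ~14 ms/step on a similar task.
# GTX580: 512 shaders @ 1544 MHz -> roughly 9-10 ms/step expected.
est = expected_ms_per_step(14.0, 448, 1215, 512, 1544)
suspicious = looks_throttled(27.0, est)  # a 27 ms/step GTX580 is way off
```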

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20123 - Posted: 5 Jan 2011 | 23:05:42 UTC - in response to Message 20116.

Try the latest Beta driver; GPUZ might be misreporting the actual clocks.

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20124 - Posted: 5 Jan 2011 | 23:11:07 UTC

I now suspect the PSU. I have a new one on order, and will give it another shot when it arrives. Stay tuned.
____________
Dublin, California
Team: SETI.USA

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20125 - Posted: 6 Jan 2011 | 7:11:43 UTC - in response to Message 20124.
Last modified: 6 Jan 2011 | 7:13:34 UTC

Did you check the GPU temps, as kts suggested? You may need to up the fan speed.
These GPUs throttle back if they are running too hot or if they are overclocked too much, but still report the clock speed as configured (rather than actual speed). If the PSU was insufficient the card might do something similar, reduce speed or disable some shaders.

STE\/E [BlackOps]
Send message
Joined: 18 Sep 08
Posts: 348
Credit: 250,952,560
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwat
Message 20126 - Posted: 6 Jan 2011 | 9:17:10 UTC - in response to Message 20125.

Did you check the GPU temps, as kts suggested? You may need to up the fan speed.
These GPUs throttle back if they are running too hot or if they are overclocked too much, but still report the clock speed as configured (rather than actual speed). If the PSU was insufficient the card might do something similar, reduce speed or disable some shaders.


I run all my GTX boxes on AUTO Fan and haven't noticed any of them Throttle Back yet ...

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20127 - Posted: 6 Jan 2011 | 10:05:50 UTC - in response to Message 20126.

Manually raising the fan speed increases noise, but reduces temperatures in the card and the system. This increases longevity and may reduce power consumption – hot cards leak more energy so they need more Amps.
Having good airflow in the case also helps. A card in a case with poor airflow would be more likely to overheat/throttle back.
If the card is throttling back due to a PSU problem, that is actually a good thing; before these Fermis, systems just crashed if the PSU was not up to the job.

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20128 - Posted: 6 Jan 2011 | 14:55:21 UTC

The speeds are stock and the fans on auto. Temps are 68-70c.
____________
Dublin, California
Team: SETI.USA

kts
Send message
Joined: 4 Nov 10
Posts: 21
Credit: 25,973,574
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwat
Message 20131 - Posted: 7 Jan 2011 | 1:26:24 UTC - in response to Message 20128.

We've discussed PSU, overheating, throttling, optimizing driver, RAM, CPU usage, etc. You say the card works fine (like a working GTX580 should) on other projects but not here... total speculation, but could the access pattern of GPUGRID be finding bad video RAM, causing error correction that just doesn't happen with other projects? What benchmark/testing tools are there to verify whether a video card is operating properly in hardware? If not hardware, then software? What is the proper troubleshooting sequence for a video card?

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20132 - Posted: 7 Jan 2011 | 1:37:20 UTC - in response to Message 20131.

We've discussed PSU, overheating, throttling, optimizing driver, RAM, CPU usage, etc. You say the card works fine (like a working GTX580 should) on other projects but not here... total speculation, but could the access pattern of GPUGRID be finding bad video RAM, causing error correction that just doesn't happen with other projects? What benchmark/testing tools are there to verify whether a video card is operating properly in hardware? If not hardware, then software? What is the proper troubleshooting sequence for a video card?


Correction: it ran the other project at top speed for a few test tasks, but running overnight the same slowdown became obvious. That is why I am suspecting the PSU... even though GPU-Z says it is running full out, I suspect it is not.
____________
Dublin, California
Team: SETI.USA

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20133 - Posted: 7 Jan 2011 | 7:47:57 UTC
Last modified: 7 Jan 2011 | 7:57:40 UTC

Ah! Looks like I was on the right track with the PSU.

http://www.gpugrid.net/result.php?resultid=3534780

Normal for a GTX 580, right?

My old 750w PSU has 3x 6pin pcie connectors. The 580 needs a 6pin and an 8pin. So I had to use a Y connector which takes 2 6pins and makes an 8pin. I tried switching the three of them around with no change. I had a new thought tonight. With all my 5870s, they each included a Y connector to convert two old IDE plugs to a 6pin PCI plug. So a double Y from several IDE cables into the Y 8pin connector seems to be working! The PCI rails in my PSU are dead. Looking forward to my new PSU to make this cable mess clean. The point is, it's the PSU, and the PCI rails FAIL.
____________
Dublin, California
Team: SETI.USA

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 20136 - Posted: 7 Jan 2011 | 11:20:55 UTC - in response to Message 20133.

Ah! Looks like I was on the right track with the PSU.

http://www.gpugrid.net/result.php?resultid=3534780

Normal for a GTX 580, right?


It's nearly normal. I think your CPU limits the performance of your GTX 580, or it still may be the PSU. See this task (processed on my overclocked GTX 580) for a speed reference :) Or this same type of task processed on my overclocked GTX 480. You can find other GTX 580s for reference on the 'top hosts' list.

My old 750w PSU has 3x 6pin pcie connectors. The 580 needs a 6pin and an 8pin. So I had to use a Y connector which takes 2 6pins and makes an 8pin.

This could be dangerous if your PSU has separate 12V rails and you connect them together with this Y-cable converter.

I tried switching the three of them around with no change. I had a new thought tonight. With all my 5870s, they each included a Y connector to convert two old IDE plugs to a 6pin PCI plug. So a double Y from several IDE cables into the Y 8pin connector seems to be working!

This is the same dangerous method as the previous one, combining different 12V rails. It's not recommended to use cable converters for power connectors (especially high-current ones like the PCI-E or CPU connectors): they add unnecessary contact resistance in the path of high currents, causing voltage loss and hot (even burning) connectors.

The PCI rails in my PSU are dead. Looking forward to my new PSU to make this cable mess clean. The point is, it's the PSU, and the PCI rails FAIL.

That's right. By the way 750W should be enough for a GTX 580.
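Retvari's contact-resistance warning is easy to put numbers on. A back-of-the-envelope sketch (the 20 A draw and 10 milliohm junction resistance are illustrative guesses, not measurements):

```python
def junction_loss_w(current_a, resistance_ohm):
    """Heat dissipated in one adapter contact: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

def junction_drop_v(current_a, resistance_ohm):
    """Voltage lost across that contact: V = I * R."""
    return current_a * resistance_ohm

# ~20 A on the 12 V rails through a 10 milliohm adapter junction:
heat = junction_loss_w(20.0, 0.010)   # ~4 W of heat in one small connector
drop = junction_drop_v(20.0, 0.010)   # ~0.2 V shaved off the 12 V rail
```

Every extra Y-adapter adds another such junction in series, which is why the losses (and the heat) pile up quickly.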

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20138 - Posted: 7 Jan 2011 | 15:55:28 UTC - in response to Message 20136.

Right. My point was to demonstrate that the PCI rails were the problem, by using different rails. Proof of concept. I agree that the various Y methods are wrong.
____________
Dublin, California
Team: SETI.USA

Profile skgiven
Volunteer moderator
Project tester
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3484
Credit: 825,510,290
RAC: 1,372,175
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20139 - Posted: 7 Jan 2011 | 16:59:01 UTC - in response to Message 20138.

Hi zombie67, good to hear all the details and that the PSU replacement resolved the problem. Your times are spot on now.
Perhaps your GPU was only getting 225W or less due to the connectors. Whatever, I'm impressed with how the GPU handled/survived this.
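The 225 W figure follows directly from the PCI-E power-delivery limits (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin). A minimal sketch of the arithmetic:

```python
# PCI-E power-delivery limits, in watts.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

healthy = SLOT + SIX_PIN + EIGHT_PIN   # 300 W available to the card
degraded = SLOT + SIX_PIN + SIX_PIN    # 225 W if the 8-pin only delivers 6-pin power

# A GTX 580 is rated at 244 W, so a 225 W ceiling forces it to slow down.
```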

Good luck,

kts
Send message
Joined: 4 Nov 10
Posts: 21
Credit: 25,973,574
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwat
Message 20141 - Posted: 8 Jan 2011 | 1:33:54 UTC - in response to Message 20136.

Glad to hear you tracked down the issue for your card and all is well now.


My old 750w PSU has 3x 6pin pcie connectors. The 580 needs a 6pin and an 8pin. So I had to use a Y connector which takes 2 6pins and makes an 8pin.

This could be dangerous if your PSU has separate 12V rails and you connect them together with this Y-cable converter.
<~~~snip~~~>
It's not recommended to use cable converters for power connectors (especially high-current ones like the PCI-E or CPU connectors): they add unnecessary contact resistance in the path of high currents, causing voltage loss and hot (even burning) connectors.
<~~~snip~~~>
By the way 750W should be enough for a GTX 580.


This information added to the recent Selecting a PSU for dual GTX570 / 580 use thread fills in more important details for system building. Your problem has now been Y-converted to a useful solution for others.
(Still looking for recipes ;) )

Profile Ascholten
Send message
Joined: 21 Dec 10
Posts: 7
Credit: 78,122,357
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 20146 - Posted: 9 Jan 2011 | 20:10:53 UTC - in response to Message 20136.

I had to put a new power supply in my system when I got the GTX570. That thing will take a LOT of amps just by itself when you load it up. I don't remember the exact numbers, but I think mine drew about 190 watts going from idle to full load on the card. If I am not mistaken the card recommends a 550 watt supply at the very minimum. Now don't forget any other stuff you have in your system: RAM disk, multiple HDDs, a lot of RAM, a 6-core processor... all that stuff EATS power quickly.
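Aaron's add-it-all-up approach can be sketched as a quick budget. The wattages below are nameplate-style estimates for illustration (219 W is the GTX 570's rated TDP; the rest are rough guesses), and the 1.5x headroom factor is a common rule of thumb, not a spec:

```python
# Illustrative nameplate-style wattages -- estimates, not measurements.
parts = {
    "GTX 570":          219,  # rated board TDP
    "6-core CPU":       130,
    "motherboard/RAM":   60,
    "drives/fans/misc":  40,
}

load_w = sum(parts.values())      # worst-case simultaneous draw
recommended_w = load_w * 1.5      # keep the PSU well below its rating

# load_w is 449 W here, so a ~675 W or larger PSU leaves comfortable headroom.
```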

I put a 1KW supply in my computer and it keeps everything running fine, AFTER I burnt my earlier 600 watt supply up.

Don't be cheap with your PSU; if the thing cooks off, you could get a shot of your mains power straight up your card's backside before the fuse blows. Although it would look awesome, a GTX is a fairly expensive paperweight. Power supply manufacturers may claim that can never happen, but I have a hard drive with chips blown off the board to prove otherwise.

Aaron

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20158 - Posted: 12 Jan 2011 | 22:57:59 UTC

It looks like my hunch was wrong. New 1000w PSU, and still no joy. I am down to thinking that the card is throttling itself. According to anandtech:

Much like GDDR5 EDC complicated memory overclocking, power throttling would complicate overall video card overclocking, particularly since there’s currently no way to tell when throttling kicks in. On AMD cards the clock drop is immediate, but on NVIDIA’s cards the drivers continue to report the card operating at full voltage and clocks. We suspect NVIDIA is using a NOP or HLT-like instruction here to keep the card from doing real work, but the result is that it’s completely invisible even to enthusiasts. At the moment it’s only possible to tell if it’s kicking in if an application’s performance is too low. It goes without saying that we’d like to have some way to tell if throttling is kicking in if NVIDIA fully utilizes this hardware.


Maybe that is what's happening here.
____________
Dublin, California
Team: SETI.USA

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20159 - Posted: 13 Jan 2011 | 0:10:57 UTC

Forgot to finish my thought in the previous post. I have RMA'd the card. Let's see if I have better luck with the replacement.
____________
Dublin, California
Team: SETI.USA

Profile Retvari Zoltan*
Avatar
Send message
Joined: 20 Jan 09
Posts: 919
Credit: 3,463,774,437
RAC: 4,320,304
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 20161 - Posted: 13 Jan 2011 | 3:01:14 UTC - in response to Message 20158.

It looks like my hunch was wrong. New 1000w PSU, and still no joy. I am down to thinking that the card is throttling itself. According to anandtech:

Much like GDDR5 EDC complicated memory overclocking, power throttling would complicate overall video card overclocking, particularly since there’s currently no way to tell when throttling kicks in. On AMD cards the clock drop is immediate, but on NVIDIA’s cards the drivers continue to report the card operating at full voltage and clocks. We suspect NVIDIA is using a NOP or HLT-like instruction here to keep the card from doing real work, but the result is that it’s completely invisible even to enthusiasts. At the moment it’s only possible to tell if it’s kicking in if an application’s performance is too low. It goes without saying that we’d like to have some way to tell if throttling is kicking in if NVIDIA fully utilizes this hardware.


Maybe that is what's happening here.


It would be much easier to give you useful advice if you were more specific about your component types. I am using a 1000W PSU (LC-Power Legion X2) for a dual GPU configuration (GTX 480 + GTX 580), and I have no such problems, even when I overclocked my GTX 580 to 900MHz. Now it's running at 850MHz at factory voltage (1.050V). So if the cause of the slowness is the protective throttling, it's too sensitive on your GPU only, and a replacement should work fine. It's designed to protect GPUs from overloads caused by GPU stress-test utilities such as FurMark - 'real' GPU applications (including GPUGRID) cannot cause that much power draw, and should not trigger this throttling. But if the new one is also slow, it must be some other (hardware or software) component we can't think of. Maybe a screensaver.

BOINC CPU tasks run at low priority, while GPU tasks run at below normal priority (which is higher than 'low'), so any other CPU-demanding application you run at normal priority (higher than both) will hold up BOINC CPU and GPU tasks, or at least slow them down. There are tools for changing priority levels (I'm using eFMer Priority); raising priority levels, however, can make your computer less responsive, or even unresponsive.

You can monitor your GPU with MSI Afterburner 2.1 beta 5. KASHIF_HIVPR type tasks should produce 90-95% GPU usage.
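The priority ordering Retvari describes can be written down explicitly. A simplified sketch using the Windows base-priority numbers (4/6/8/10 for idle, below normal, normal and above normal); the helper name is mine:

```python
# Windows scheduling classes, lowest to highest base priority.
PRIORITY = {"idle/low": 4, "below normal": 6, "normal": 8, "above normal": 10}

def preempts(a, b):
    """True if class `a` tends to get CPU time ahead of class `b`."""
    return PRIORITY[a] > PRIORITY[b]

# BOINC CPU tasks run at low, GPU-feeding tasks at below normal, so any
# ordinary app at normal priority can hold up both kinds of BOINC task.
```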

zombie67 [MM]
Avatar
Send message
Joined: 16 Jul 07
Posts: 161
Credit: 250,721,498
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 20162 - Posted: 13 Jan 2011 | 3:09:41 UTC - in response to Message 20161.

I was not clear: I RMA'd the GPU, not the PSU. The new PSU is a 1000w Cooler Master. And no, no screensaver is being used. Also, this is a dedicated cruncher; no other tasks are running.
____________
Dublin, California
Team: SETI.USA

Post to thread

Message boards : Graphics cards (GPUs) : GTX580 specifications