
Message boards : Graphics cards (GPUs) : Help with my GTX 980

mymbtheduke
Joined: 3 Sep 12
Posts: 40
Credit: 186,780,650
RAC: 0
Message 40808 - Posted: 9 Apr 2015 | 22:14:50 UTC

So I just got a used Zotac AMP GTX 980 on eBay. I installed it this afternoon and it has been running for a few hours. Wow, it gets hot: 72°C hot. The backplate is almost too hot to touch. I backed the TDP limit from 80% down to 70%; the GPU clock went from 1292 MHz to 1227 MHz, and the temperature is now 64°C.

Do the 980s always get this hot? Is that common?
Should I remove the back plate? Won't that help cool the card? My 660ti didn't have a back plate.

I also noticed that the GPU load was at 80%. I cut back by 2 WCG WUs so my i7 wouldn't have to hyperthread, and the GPU load went up to 84%. I never thought my rig wouldn't be able to feed this beast. I have an i7 at 4 GHz with 2100 MHz RAM. Wow.

Are there any other tweaks I should be doing to up the GPU load?

Thanks as always.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 40810 - Posted: 9 Apr 2015 | 23:14:51 UTC - in response to Message 40808.

So I just got a used Zotac AMP GTX 980 on eBay. I installed it this afternoon and it has been running for a few hours. Wow, it gets hot: 72°C hot. The backplate is almost too hot to touch. I backed the TDP limit from 80% down to 70%; the GPU clock went from 1292 MHz to 1227 MHz, and the temperature is now 64°C.

Do the 980s always get this hot?

Yes.

Is that common?

Yes. These cards have to dissipate up to 165 W, while the GTX 660 Ti has only a 150 W TDP; but because the GTX 660 Ti is superscalar, not all of its CUDA cores can be used by the GPUGrid client, so its real power draw is much lower than a GTX 980's.

Should I remove the back plate?

Definitely not.

Won't that help cool the card?

No, it won't.

My 660ti didn't have a back plate.

That's because the GTX 660 Ti is a much shorter card, and a back plate is a high-end feature.

I also noticed that the GPU load was at 80%. I cut back by 2 WCG WUs so my i7 wouldn't have to hyperthread, and the GPU load went up to 84%. I never thought my rig wouldn't be able to feed this beast. I have an i7 at 4 GHz with 2100 MHz RAM. Wow.

Blame it on the Windows Display Driver Model (WDDM) architecture of modern Windows releases.

Are there any other tweaks I should be doing to up the GPU load?

You could try setting the swan_sync environment variable to make the GPUGrid client use a full CPU core:
Start button ->
type systempropertiesadvanced into the search box ->
press Enter, or click on the result ->
click the "Environment Variables" button near the bottom ->
click the "New" button in the System variables section ->
variable name: swan_sync ->
variable value: 1 ->
click OK 3 times ->
exit BOINC Manager, stopping all scientific applications ->
restart BOINC Manager
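
If you prefer the command line, the same variable can be set with Windows' built-in setx tool. A minimal sketch, assuming an elevated (administrator) Command Prompt, since the /M switch writes to the system rather than the user environment; BOINC still has to be restarted afterwards:

    rem Set swan_sync system-wide; run from an elevated Command Prompt.
    rem Its presence is what matters; 1 is the conventional value.
    setx swan_sync 1 /M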

Thanks as always.

You're welcome.

mymbtheduke
Joined: 3 Sep 12
Posts: 40
Credit: 186,780,650
RAC: 0
Message 40811 - Posted: 9 Apr 2015 | 23:47:25 UTC - in response to Message 40810.

Thank you for the quick reply. This is the first time I have had a card better than a 660 Ti or another ~$250 card. I think I will save for another 980; then I can run two 980s and the 750 and max this board out.

JM
Joined: 18 Mar 09
Posts: 5
Credit: 624,501,954
RAC: 0
Message 40885 - Posted: 15 Apr 2015 | 22:37:58 UTC - in response to Message 40810.

Hi!

variable name: swan_sync ->
variable value: 1 ->

Shouldn't it be value=0?

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 40886 - Posted: 15 Apr 2015 | 23:29:32 UTC - in response to Message 40885.

Hi!
variable name: swan_sync ->
variable value: 1 ->

Shouldn't it be value=0?

Its value doesn't matter, only its presence. See this post.
The "recommended" value is 1. See this post.

Imakuni
Joined: 12 Nov 14
Posts: 11
Credit: 33,833,700
RAC: 0
Message 41175 - Posted: 27 May 2015 | 22:07:42 UTC - in response to Message 40810.


You could try setting the swan_sync environment variable to make the GPUGrid client use a full CPU core:
Start button ->
type systempropertiesadvanced into the search box ->
press Enter, or click on the result ->
click the "Environment Variables" button near the bottom ->
click the "New" button in the System variables section ->
variable name: swan_sync ->
variable value: 1 ->
click OK 3 times ->
exit BOINC Manager, stopping all scientific applications ->
restart BOINC Manager


Sorry to resurrect the thread.

Anyway, I tried the above method on my GTX 970. It seems that I go from ~70% GPU usage (with 10~15% CPU usage) to ~75% GPU usage (with 30~33% CPU usage). With 2 tasks running at a time, I go from 80% to 90%.

Is this as much as it's supposed to give, or should it be going higher? Also, can you point me to a place where I can read about what's actually going on? I like learning how things work, and this swan method seems rather intriguing...

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 41180 - Posted: 28 May 2015 | 15:43:15 UTC - in response to Message 41175.

Anyway, I tried the above method on my GTX 970. It seems that I go from ~70% GPU usage (with 10~15% CPU usage) to ~75% GPU usage (with 30~33% CPU usage). With 2 tasks running at a time, I go from 80% to 90%.

Is this as much as it's supposed to give, or should it be going higher?

On modern Windows releases, that's about as much as you can get. You can gain a little more if you reduce the number of CPU tasks running.

Also, can you point me to a place where I can read about what's actually going on?

1. The WDDM (Windows Display Driver Model) is to blame for the lower GPU usage.
2. The GPU usage shown by monitoring utilities is not the same as the fraction of CUDA cores actually utilized; the latter is better gauged by the card's power draw. Older CUDA apps may show higher GPU usage while the card draws less power than it does when the GPUGrid app is running.

I like learning how things work, and this swan method seems rather intriguing...

The swan method works in a very simple way: the app continuously polls the GPU to see whether it has finished a piece of the calculation, instead of waiting for the GPU subsystem to notify it, so the notification latency is eliminated. The latency caused by the WDDM overhead cannot be eliminated; to avoid it, you have to use an OS which doesn't have WDDM (Windows XP or Linux).
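
GPUGrid's source isn't shown here, so the following is only a conceptual sketch of the generic CUDA mechanism behind that trade-off, not GPUGrid's actual code. The CUDA runtime lets an application choose between a blocking wait (low CPU use, extra wake-up latency per step) and spin polling (a busy-waiting CPU core, minimal latency), which matches the behavior this thread attributes to swan_sync:

    #include <cuda_runtime.h>

    __global__ void step() { /* one piece of the simulation */ }

    int main() {
        // Must be called before the CUDA context is created.
        // Blocking sync: the CPU thread sleeps until the driver signals
        // completion, adding wake-up latency to every step.
        cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
        // Spin polling (conceptually what swan_sync enables): the CPU thread
        // busy-waits, burning a full core but reacting to completion instantly.
        // cudaSetDeviceFlags(cudaDeviceScheduleSpin);

        step<<<1, 1>>>();
        cudaDeviceSynchronize(); // how this call waits depends on the flag above
        return 0;
    }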

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 41183 - Posted: 28 May 2015 | 18:50:53 UTC
Last modified: 28 May 2015 | 18:54:00 UTC

Good info, Retvari!

Also, reporting CPU usage in numbers like "10~15% CPU" and "30~33% CPU" is too vague. In Task Manager or Process Explorer, look at the process details for the column called CPU; you can use it to figure out how many logical processors the acemd process is utilizing.

For example, on my 4-core hyperthreaded machine, I have 8 logical CPUs. If a process is using one full logical core, it shows "12.5" in the CPU column of Process Explorer.

For me:
- A GPUGrid task (acemd) without Swan_Sync uses about 0.8-1.5 in that CPU column, meaning about 0.064 to 0.12 of a CPU (all I did was divide by 12.5).
- A GPUGrid task (acemd) with Swan_Sync uses 12.5 in that CPU column, meaning about 1.0 CPU.
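
If it helps, here is a tiny sketch of that conversion, using the sample values from the list above (adjust logical_cpus for your own machine):

    // Convert Process Explorer's CPU column (percent of all logical CPUs)
    // into the number of logical cores a process is using.
    #include <cstdio>

    int main() {
        const int logical_cpus = 8;                            // 4 cores, hyperthreaded
        const double percent_per_core = 100.0 / logical_cpus;  // 12.5% per logical CPU
        const double samples[] = {0.8, 1.5, 12.5};             // CPU column readings
        for (double pct : samples)
            std::printf("%.1f%% of total = %.3f cores\n", pct, pct / percent_per_core);
        return 0;
    }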

Details matter. That should also help you understand Swan_Sync. I run without it, because I have several CPU tasks that I don't want it stealing CPU time from.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41187 - Posted: 28 May 2015 | 22:35:49 UTC - in response to Message 41183.
Last modified: 28 May 2015 | 22:46:33 UTC

The WDDM issue appears to be exacerbated on bigger (more powerful) and newer NVIDIA GPUs. However, it may be that the apps cannot yet fully exploit the potential of the more recent GPU architectures, or can only do so for some task types; we know performance on the latest NVIDIA generation varies with task type. It would be interesting to see whether there is any gain from running 2 WUs at a time on XP x64, Server 2003 R2, or Linux. If there were, assumptions would need to be rewritten for the GTX 900 generation.
By running multiple tasks (usually 2) on GTX 900 series GPUs under recent Microsoft operating systems (Vista onwards), you can often improve overall throughput relative to running 1 task in the same environment. What's behind this is the question.

On my i7-3770K W7 x64 system, the GPU usage according to MSI Afterburner is around 80% for both GTX 970s. That's because I've limited BOINC to 70% of the logical processors, which leaves headroom on the CPU and probably, to some extent, the RAM.
While I would like to use every thread of every CPU core I have, the reality is that logical cores are not real cores, and when you start overusing the CPU, the GPUs (which typically do 10+ times the work of the entire CPU) suffer.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 41189 - Posted: 28 May 2015 | 22:57:06 UTC
Last modified: 28 May 2015 | 23:18:59 UTC

More great points skgiven!

I used to think it was best to keep the CPUs fully saturated, but I have changed my stance on that, for a couple of reasons. First, GPU usage does indeed suffer, and I'd rather keep the GPU tasks processing as fast as possible while still doing some CPU work. Second, system responsiveness: I've found that user interaction is noticeably snappier when the CPU is not fully saturated.

So, nowadays, I typically use an app_config.xml to specify the CPU usage of all my GPU tasks. I routinely run GPUGrid tasks 6 at a time (since I have 3 GPUs), and I budget 0.4 CPUs per task, so the 6 acemd processes bounce around the (0.4 * 6 = 2.4, rounded down by BOINC's scheduler to 2) budgeted CPUs. Additionally, I set BOINC to use 88% of my 8 logical CPUs, so one logical core is always free to stay completely responsive to my actual work, even when no GPU task is running and the CPU tasks alone would otherwise saturate the machine. A sketch of such a file is below.
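
For reference, a minimal sketch of what such an app_config.xml could look like; the app name acemdlong is only my guess at the GPUGrid long-run application, so check the <name> fields in your client_state.xml for the real names. The file goes into the GPUGrid project folder, and BOINC Manager's Options -> Read config files loads it without a restart:

    <app_config>
      <app>
        <!-- hypothetical app name; copy the real one from client_state.xml -->
        <name>acemdlong</name>
        <gpu_versions>
          <!-- 0.5 GPU per task, i.e. two tasks share each GPU -->
          <gpu_usage>0.5</gpu_usage>
          <!-- the 0.4 CPU budget described above -->
          <cpu_usage>0.4</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>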

That's as optimal as I could make it, and it's worked really well for me.
