
Message boards : Server and website : CUDA 65?

Michael P. Gainor
Joined: 23 Jan 16
Posts: 15
Credit: 69,388,188
RAC: 0
Message 43682 - Posted: 2 Jun 2016 | 2:39:47 UTC

This may be an easy question, but I noticed that BOINC always lists (CUDA65). I always assumed that meant CUDA 6.5; is this correct? If it is, are there any plans to upgrade to 7 or 8? And if it doesn't mean CUDA 6.5, what does it mean, and which CUDA version do we use?

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 43684 - Posted: 2 Jun 2016 | 11:47:44 UTC - in response to Message 43682.

This may be an easy question, but I noticed that BOINC always lists (CUDA65).
I always assumed that meant CUDA 6.5; is this correct?
It's correct.

If it is, are there any plans to upgrade to 7 or 8?
There are no plans to upgrade, but the arrival of the new GPU generation (Pascal) could change that.

Michael P. Gainor
Joined: 23 Jan 16
Posts: 15
Credit: 69,388,188
RAC: 0
Message 43686 - Posted: 2 Jun 2016 | 18:26:28 UTC - in response to Message 43684.

I wonder if CUDA efficiency is why I see GPUGrid run at about 75% GPU load on my 980 Ti, while programs that don't use CUDA, like Folding@home, run in the mid 90s; i.e., they have to use more brute force to get the same performance.

Betting Slip
Joined: 5 Jan 09
Posts: 670
Credit: 2,498,095,550
RAC: 0
Message 43687 - Posted: 2 Jun 2016 | 19:38:04 UTC - in response to Message 43686.
Last modified: 2 Jun 2016 | 19:38:36 UTC

I wonder if CUDA efficiency is why I see GPUGrid run at about 75% GPU load on my 980 Ti, while programs that don't use CUDA, like Folding@home, run in the mid 90s; i.e., they have to use more brute force to get the same performance.


No, it's because you are on a Windows platform, and WDDM has the greater effect on GPU utilization.

Michael P. Gainor
Joined: 23 Jan 16
Posts: 15
Credit: 69,388,188
RAC: 0
Message 43688 - Posted: 2 Jun 2016 | 19:50:03 UTC - in response to Message 43687.
Last modified: 2 Jun 2016 | 19:50:50 UTC

Then why is Folding@home not affected the same way by WDDM? It's hitting the mid 90s.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 43689 - Posted: 2 Jun 2016 | 19:51:33 UTC - in response to Message 43686.

I wonder if CUDA efficiency is why I see GPUGrid run at about 75% GPU load on my 980 Ti
No, the WDDM overhead is to blame for that.

while programs that don't use CUDA, like Folding@home, run in the mid 90s; i.e., they have to use more brute force to get the same performance.
As far as I know, Folding@home is CUDA, but it's using an older version.
The GPU usage reading could be misleading; a better measure of the real GPU usage is the card's power consumption. Don't be surprised if the power consumption of your GPU is higher while running the GPUGrid app, even though its GPU usage reading is lower.
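A convenient way to watch power draw and GPU usage side by side, assuming a reasonably recent NVIDIA driver, is nvidia-smi --query-gpu=power.draw,utilization.gpu --format=csv -l 1 from a command prompt (note that some GeForce cards report power draw as "[Not Supported]"); GPU-Z's Sensors tab shows similar readings.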

Michael P. Gainor
Joined: 23 Jan 16
Posts: 15
Credit: 69,388,188
RAC: 0
Message 43690 - Posted: 2 Jun 2016 | 21:08:04 UTC - in response to Message 43689.

As far as I know, Folding@home is CUDA, but it's using an older version.


They used to, but recently they switched to OpenCL, with the possibility of going back to CUDA if the performance gains were good enough to warrant it, since OpenMM does have the ability to support CUDA. That's why I was surprised by my numbers.


The GPU usage reading could be misleading; a better measure of the real GPU usage is the card's power consumption. Don't be surprised if the power consumption of your GPU is higher while running the GPUGrid app, even though its GPU usage reading is lower.


I double-checked the TDP and they are about the same, with Folding@home showing a slightly higher figure of 64.6% TDP and a much higher GPU load of 93%, while GPUGrid showed 62.6% TDP and about what I normally see, 71% GPU load. All of this comes from GPU-Z with no other app running. Any thoughts?

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 43691 - Posted: 2 Jun 2016 | 22:26:20 UTC - in response to Message 43690.

I double-checked the TDP and they are about the same, with Folding@home showing a slightly higher figure of 64.6% TDP and a much higher GPU load of 93%, while GPUGrid showed 62.6% TDP and about what I normally see, 71% GPU load. All of this comes from GPU-Z with no other app running. Any thoughts?

It is the same for me on Win7 64-bit. My two GTX 960s and a GTX 970 consume pretty much the same power on Folding and GPUGrid. I keep track of it for heating purposes, though not exactly, using both GPU-Z and the power meter on my UPS. But the GPU load is higher on Folding than on GPUGrid, as you note. It may be the difference between the OpenCL that Folding uses and the CUDA that is used here.

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 43692 - Posted: 2 Jun 2016 | 22:45:46 UTC - in response to Message 43690.
Last modified: 2 Jun 2016 | 22:47:53 UTC

Any thoughts?
There's one more thing you can do: set the SWAN_SYNC environment variable to make the GPUGrid app use a full CPU thread to feed the GPU. You should reduce the number of usable CPUs in BOINC manager by one at the same time: Options -> Computing preferences -> (on the Computing tab) "Use at most XXX% of the CPUs". XXX should be reduced by 100 divided by the number of threads your CPU has; for example, on an 8-thread CPU you should reduce it by 12.5%.

Start button ->
type systempropertiesadvanced into the search box ->
press Enter, or click on the result ->
click on the "Environment Variables" button near the bottom ->
click on the "New" button near the bottom (System variables section) ->
variable name: SWAN_SYNC ->
variable value: 1 ->
click OK 3 times ->
exit BOINC manager, stopping all science applications ->
restart BOINC manager
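
If you prefer the command line, the same variable can be set with Windows' built-in setx tool from an elevated (administrator) command prompt: setx SWAN_SYNC 1 /M (the /M switch writes it as a system-wide variable). As above, restart BOINC manager afterwards so the science applications pick up the change.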

Michael P. Gainor
Joined: 23 Jan 16
Posts: 15
Credit: 69,388,188
RAC: 0
Message 43694 - Posted: 3 Jun 2016 | 1:38:10 UTC - in response to Message 43692.

I attempted it, but it didn't seem to work. You could be onto something, though: Folding@home, even with CPU projects disabled, uses about 15%-20% of each of my 8 threads. That could be why it is able to achieve a higher GPU load. Though, as you mentioned, even with that higher load of 20%, the TDP is only slightly higher?

[CSF] Aleksey Belkov
Joined: 26 Dec 13
Posts: 85
Credit: 1,215,531,270
RAC: 182,549
Message 43701 - Posted: 3 Jun 2016 | 16:00:57 UTC - in response to Message 43684.

(System variables section) ->
variable name: SWAN_SYNC ->
variable value: 1 ->


I noticed that in all the discussions on the GPUGRID forum, the value for this variable is set to 0. Proof

So what value is right?

And if I have already set <cpu_usage>1</cpu_usage> for GPUGRID apps in app_config, would it give any benefits?

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 43707 - Posted: 4 Jun 2016 | 1:44:42 UTC - in response to Message 43701.

(System variables section) ->
variable name: SWAN_SYNC ->
variable value: 1 ->

I noticed that in all the discussions on the GPUGRID forum, the value for this variable is set to 0. Proof

So what value is right?
Its value doesn't matter, only its presence. See this post.
The "recommended" value is 1. See this post.
So you can set any value for now, but for future app versions "1" may be more appropriate.

And if I have already set <cpu_usage>1</cpu_usage> for GPUGRID apps in app_config, would it give any benefits?
It tells the BOINC manager that the GPUGrid app is using one full CPU thread, regardless of what the app reports about itself to the BOINC manager, but it won't by itself make the GPUGrid app use a full CPU thread. So if you set SWAN_SYNC, it's practical to make an app_config for acemdlong and acemdshort setting <cpu_usage>1</cpu_usage>, as sketched below.
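A minimal sketch of such a file, assuming it is saved as app_config.xml in the GPUGRID project directory under the BOINC data directory (e.g. projects\www.gpugrid.net), and that the application names are acemdlong and acemdshort as above. BOINC reads it at client startup, or via Options -> Read config files:

<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>acemdshort</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>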

Michael P. Gainor
Joined: 23 Jan 16
Posts: 15
Credit: 69,388,188
RAC: 0
Message 43708 - Posted: 4 Jun 2016 | 2:36:44 UTC - in response to Message 43707.

How will I know if it worked, or that I did it correctly?

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 43712 - Posted: 4 Jun 2016 | 12:34:23 UTC
Last modified: 4 Jun 2016 | 12:39:51 UTC

You will know that the app_config.xml file worked because, when you restart BOINC, the Event Log will say:

6/4/2016 8:40:45 AM | GPUGRID | Found app_config.xml

... and your BOINC GPUGrid tasks' statuses will say (x CPUs + y NVIDIA GPUs), where x and y are the values you set in your app_config.xml. As a reminder, this is simply a "budgeting" setting: you're telling BOINC how much CPU/GPU to consider allocated when that task runs.

You will know that the SWAN_SYNC environment variable worked because, when you look in Task Manager or Process Explorer, you will see that the acemd* processes now use a full CPU thread instead of only part of one. For instance, on an 8-thread PC it would show as "12.5 CPU", which is a fully utilized thread, instead of something like "4 CPU", which is only part of a thread.

Note: I don't use SWAN_SYNC myself, because you only get a small performance boost at the cost of an entire CPU thread, and I'd prefer that a CPU app be able to use the remainder of that thread. Also, I use 0.5 CPU and 0.5 GPU in my GPUGrid app_config.xml settings, so the tasks run 2-tasks-per-GPU and budget 1 CPU thread for every 2 tasks running (a sketch of that variant follows).
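For reference, the only change from the app_config.xml sketch earlier in the thread is the two budgeting values; again assuming the acemdlong/acemdshort application names:

<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- two tasks share one GPU -->
      <cpu_usage>0.5</cpu_usage>  <!-- 1 CPU thread budgeted per 2 tasks -->
    </gpu_versions>
  </app>
  <!-- repeat the <app> block for acemdshort -->
</app_config>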

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Message 43714 - Posted: 4 Jun 2016 | 16:11:57 UTC - in response to Message 43712.

I use 0.5 CPU and 0.5 GPU in my GPUGrid app_config.xml settings, so the tasks run 2-tasks-per-GPU and budget 1 CPU thread for every 2 tasks running.
That's another way to maximize GPU usage, but it's recommended only for the GTX 980 Ti, GTX 980, and GTX 970, as it will double the runtime and your host can easily miss the 24h bonus.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 43715 - Posted: 4 Jun 2016 | 16:13:38 UTC - in response to Message 43714.
Last modified: 4 Jun 2016 | 16:13:58 UTC

I use 0.5 CPU and 0.5 GPU in my GPUGrid app_config.xml settings, so the tasks run 2-tasks-per-GPU and budget 1 CPU thread for every 2 tasks running.
That's another way to maximize GPU usage, but it's recommended only for the GTX 980 Ti, GTX 980, and GTX 970, as it will double the runtime and your host can easily miss the 24h bonus.



:) I don't care at all about the bonus; I care about throughput. And in most cases this appears to increase my throughput by about 20% for the project. I run 2 GPUGrid tasks at a time, even on my "lowly" GTX 660 Ti GPUs.

