
Message boards : Graphics cards (GPUs) : Curiosity Question about GPU memory

Angelique
Joined: 27 Oct 12
Posts: 14
Credit: 29,337,200
RAC: 0
Message 40876 - Posted: 15 Apr 2015 | 0:15:59 UTC

I have a question, mostly out of curiosity. I read that running multiple tasks on one GPU needs more GPU memory. Could you increase a GPU's memory? I read on Google that 3D Analyzer could do it, but would that even really work with a high-end GPU like a 780? And would it be beneficial? I have a ton of system RAM that isn't being used at all.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41029 - Posted: 4 May 2015 | 21:42:11 UTC - in response to Message 40876.
Last modified: 4 May 2015 | 21:42:30 UTC

...and would it be beneficial? I have a ton of system RAM that isn't being used at all.

Your GTX 780 has 3GB of GDDR5. That is sufficient to run 2 tasks at a time; however, it's probably not 'beneficial', or not by much, on that GPU (W8.1). On the GTX 970, GTX 980 and GTX Titan X it is beneficial.

Could you increase a GPU's memory?

No - you would basically need to redesign the GPU. You can already buy cards with 4GB, 6GB and 12GB of GDDR5, so there would be no point trying to upgrade an existing card.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 41030 - Posted: 4 May 2015 | 21:48:00 UTC

skgiven:

What do you use to determine whether it is beneficial to run 2-at-a-time or not? I'm in communication with the BOINC devs, and we're thinking of adding something like "Progress % increase per minute" as a column in the Tasks grid, to help a user assess. Would you find that useful at all? I can't really look at results very easily, since my tasks bounce around between my GPU types too often.

Also, it's possible that this user was requesting that the app itself "consume" more GPU RAM. The answer to that question, so far as I know, is: The app already makes a decision on how much RAM to use -- it will use less RAM on a less-capable GPU. That's from my testing a couple years ago, but it probably also holds true today.

Regards,
Jacob

Angelique
Joined: 27 Oct 12
Posts: 14
Credit: 29,337,200
RAC: 0
Message 41059 - Posted: 10 May 2015 | 21:28:08 UTC - in response to Message 41030.


What do you use to determine whether it is beneficial to run 2-at-a-time or not? I'm in communication with the BOINC devs, and we're thinking of adding something like "Progress % increase per minute" as a column in the Tasks grid, to help a user assess. Would you find that useful at all? I can't really look at results very easily, since my tasks bounce around between my GPU types too often.

Also, it's possible that this user was requesting that the app itself "consume" more GPU RAM. The answer to that question, so far as I know, is: The app already makes a decision on how much RAM to use -- it will use less RAM on a less-capable GPU. That's from my testing a couple years ago, but it probably also holds true today.


I was referring to the actual size of the GPU memory, not the task's memory. From Googling I have found it is actually possible to increase GPU memory using system memory, but I didn't need to try that since Windows had already dedicated system memory to the GPU for some reason - it added 8 extra GB of virtual memory to the GPU on its own, which I found strange. But as skgiven has said, GPUGrid can run 2 tasks with the 3GB the 780 has.

But my system reports 11GB of total GPU memory: the card's 3GB of dedicated memory plus the 8GB Windows allocated to it.

I have monitored my GPU memory usage but have never needed more than 3GB. However, from Googling I have found that NVIDIA GPUs have been given virtual memory support starting with DirectX 10.

http://www.anandtech.com/show/2116/2

So anyway, my point is that it is possible to increase the memory, but running 2 GPU tasks at the same time doesn't fill the GPU memory like I thought it would.

mikey
Joined: 2 Jan 09
Posts: 292
Credit: 2,238,228,615
RAC: 10,963,121
Message 41060 - Posted: 11 May 2015 | 10:52:32 UTC - in response to Message 41030.

skgiven:

What do you use to determine whether it is beneficial to run 2-at-a-time or not? I'm in communication with the BOINC devs, and we're thinking of adding something like "Progress % increase per minute" as a column in the Tasks grid, to help a user assess. Would you find that useful at all? I can't really look at results very easily, since my tasks bounce around between my GPU types too often.

Also, it's possible that this user was requesting that the app itself "consume" more GPU RAM. The answer to that question, so far as I know, is: The app already makes a decision on how much RAM to use -- it will use less RAM on a less-capable GPU. That's from my testing a couple years ago, but it probably also holds true today.

Regards,
Jacob


I use GPU-Z, and if the GPU is not being used at 95% or more it is generally okay to try a 2nd, 3rd, etc. unit to see if it is faster.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 41061 - Posted: 11 May 2015 | 12:02:58 UTC

How do you measure if 2-at-a-time is faster than 1-at-a-time? Do you just look at GPU Usage, and assume that "higher GPU Usage" means that throughput is faster? Because, that may not be the case...

The most direct test would be to do several tasks 1-at-a-time, then do several tasks 2-at-a-time, and then compare apples to apples by comparing tasks from the same batch. But that's tedious, and especially tricky when you have tasks that bounce around between different GPUs in the system.

That's why we're thinking of adding a "Progress % per min" column, maybe.
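
For reference, the arithmetic such a column would automate is simple. Here is a rough Python sketch; the readings are placeholders you would take from the Progress column a known interval apart, not real data:

# Sketch: "progress % per minute" from two manual readings of a task's
# Progress column, taken a fixed interval apart.

def pct_per_min(start_pct, end_pct, minutes):
    """Progress gained per minute over the sampled interval."""
    return (end_pct - start_pct) / minutes

# Placeholder readings taken 1 minute apart - substitute your own.
one_task = pct_per_min(10.000, 10.150, 1.0)        # single task running alone
two_tasks = (pct_per_min(10.200, 10.280, 1.0)
             + pct_per_min(55.000, 55.080, 1.0))   # sum over both concurrent tasks

print(f"1-at-a-time : {one_task:.3f} %/min")
print(f"2-at-a-time : {two_tasks:.3f} %/min (combined)")
print("2-at-a-time is better" if two_tasks > one_task else "1-at-a-time is better")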

mikey
Joined: 2 Jan 09
Posts: 292
Credit: 2,238,228,615
RAC: 10,963,121
Message 41115 - Posted: 20 May 2015 | 11:36:20 UTC - in response to Message 41061.

How do you measure if 2-at-a-time is faster than 1-at-a-time? Do you just look at GPU Usage, and assume that "higher GPU Usage" means that throughput is faster? Because, that may not be the case...

The most direct test would be to do several tasks 1-at-a-time, then do several tasks 2-at-a-time, and then compare apples to apples by comparing tasks from the same batch. But that's tedious, and especially tricky when you have tasks that bounce around between different GPUs in the system.

That's why we're thinking of adding a "Progress % per min" column, maybe.


I do it by adding concurrent tasks until the time to completion is slower than running each one by itself. So, as you said, I run one task and log the time it took, then I add a 2nd task to run at the same time and check the time to complete both units; if it's more than double, it isn't worth it.

There are other issues too, such as heat: if 2 tasks heat the GPU into the upper 70s C or even the 80s, that too is a no-go for me. No, it's not very scientific; it's a lot of trial and error, and what works on this PC doesn't always work on that PC - a 6-core CPU may handle multiple tasks at the same time much better than a dual-core does, for example. In most cases you also have to give up CPU cores just to keep the GPU fed and running as fast as possible, but again not in all cases. The more GPU tasks you run at the same time, the more CPU cores you need, though.
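
In code form, the comparison is roughly this (a minimal Python sketch with made-up runtimes; substitute your own measurements):

# Sketch: is running n tasks concurrently worth it?
# Compare the single-task runtime against the effective per-task time
# when n tasks share the GPU (average concurrent runtime divided by n).

def effective_time_per_task(concurrent_runtimes_h):
    n = len(concurrent_runtimes_h)
    return sum(concurrent_runtimes_h) / n / n   # average runtime, then per-task share

single_task_h = 6.0              # made-up: one task alone takes 6 hours
two_at_once_h = [11.0, 11.4]     # made-up: each of two concurrent tasks

per_task = effective_time_per_task(two_at_once_h)
print(f"Effective time per task when doubled up: {per_task:.2f} h")
print("Worth it" if per_task < single_task_h else "Not worth it")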

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 41116 - Posted: 20 May 2015 | 11:55:41 UTC

Yeah, but you also have to make sure to compare task types -- compare 1-at-a-time of a given type, against 2-at-a-time of a given type.

With all of the task type variability at GPUGrid, and the long task completion times, and the fact that tasks can bounce between my GPUs when I pause/resume... it all means that I cannot easily do the computations that you suggest.

I did them for POEM a long time ago, to prove that, for me, 3-at-a-time worked better than 1 or 2 or 4... but at GPUGrid it is much more difficult to get accurate results for the comparisons I'm interested in. :)

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 41117 - Posted: 20 May 2015 | 12:10:09 UTC
Last modified: 20 May 2015 | 12:26:27 UTC

That being said, though... I like your technique mikey, and I just did a couple of 1-minute tests using it. I happen to have 6 GERARD_FXCXLC12_LIG tasks, so I did some testing on my GTX 970, while suspending everything else.

You might like the results. Here goes:

GTX 970:

1-at-a-time: (80% GPU Usage)
Start %: 1.429
End %: 1.577
Difference: 0.148 %/min

2-at-a-time: (88% GPU Usage)
Start Task 1 %: 1.601
End Task 1 %: 1.680
Difference: 0.079 %/min
Start Task 2 %: 71.944
End Task 2 %: 72.024
Difference: 0.08 %/min


So... The 1-at-a-time-value 0.148 %/min is actually less than the sum of the 2-at-a-time-values, 0.159 %/min. So, I'll continue to run 2-at-a-time with these numbers.

Edit: I think I'll now perform the same test on my GTX 660 Ti.

GTX 660 Ti:
1-at-a-time: (90% GPU Usage)
Start %: 34.062
End %: 34.154
Difference: 0.092 %/min

2-at-a-time: (94% GPU Usage)
Start Task 1 %: 34.236
End Task 1 %: 34.282
Difference: 0.046 %/min
Start Task 2 %: 19.780
End Task 2 %: 19.828
Difference: 0.048 %/min


Again, the 1-at-a-time value 0.092 %/min is less than the sum of the 2-at-a-time values, 0.094 %/min... but this one might be too close to call :) I think 2-at-a-time may be "definitely beneficial" on my GTX 970, but "only marginally/questionably better" on my GTX 660 Ti GPUs. For reference, my system has 3 GPUs: 1 GTX 970 and 2 GTX 660 Tis, and tasks bounce between them all the time, as I suspend/resume often.

I'll continue to run 2-at-a-time, with these numbers. Thanks for the idea! And for the confidence in knowing that throughput is better, despite tasks taking a long long time :) I generally barely miss the 24-hour bonus when running 2-at-a-time, but my overall throughput is a little higher as compared to 1-at-a-time... so I consider that a win for GPUGrid!

PS: Sorry all for hijacking this thread. :(

mikey
Joined: 2 Jan 09
Posts: 292
Credit: 2,238,228,615
RAC: 10,963,121
Message 41120 - Posted: 21 May 2015 | 10:14:15 UTC - in response to Message 41117.

That being said, though... I like your technique mikey, and I just did a couple of 1-minute tests using it. [...]
I'll continue to run 2-at-a-time, with these numbers. Thanks for the idea!


Real world tests are always helpful!!
I am glad it worked out for you and gpugrid too!

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 41199 - Posted: 29 May 2015 | 14:30:47 UTC - in response to Message 41120.
Last modified: 29 May 2015 | 14:41:12 UTC

Much has been said on this already, and I'm going over a lot of it, but:

You can change CPU usage in BOINC from 50% up to 100% and usually see changes in GPU usage and GPU power draw, indicating an increase or decrease in the GPU work being performed. Similarly, just looking at GPU usage and power draw (MSI Afterburner) will indicate the overall GPUGrid work being done, but it will not give you accurate performance measurements, and those are required to determine whether running multiple tasks is genuinely worthwhile.

For GPUGrid, if you look at the time a task takes to reach 10% you can accurately predict the time it will take to reach 100%. However, if you tried to auto-optimize by running a task to 10%, measuring performance/time, and then running 2 tasks to 10% to compare and decide what is optimal, the tasks might crash (resource/slot allocation), and BOINC only sees the app type, not the task types. Also, some tasks have been known to use different amounts of resources (GDDR) throughout the run, so it might appear optimal to run 2 tasks at 5 or 10%, but by 50% you might discover that you don't have enough GDDR and overall you are losing performance.
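(To illustrate the 10% rule with made-up numbers: a task that takes 90 minutes to reach 10% can be expected to take roughly 15 hours in total.)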

So it's really a matter of running 1 task to completion and then 2 tasks (of the same type) on the same GPU, and comparing the runtime of the single task against the average runtime of the concurrent tasks divided by n, where n is the number of tasks run at once.
As suggested, while running 2 tasks, overall performance varies by task type (and task type combination), and at times there can be many different types of task at GPUGrid.
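To put made-up numbers on that: if one task alone finishes in 10 hours and two concurrent tasks each take about 18 hours, the effective time per task is 18/2 = 9 hours, a net gain; at 22 hours each (11 hours effective per task) you would be losing throughput.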

Performance is also affected by the GPU model/specs, operating system, BOINC settings (especially CPU usage), the use of SWAN_SYNC, your system's specs and how heavily the system is used.
In particular, heavy CPU use, especially for multiple BOINC projects (or scans, gaming, defrags…), can result in very mixed runtimes and inaccurate measurements.
Running 2 GPU tasks simply might not work on a system with a significant bottleneck, for example an older system with DDR2, while the same card might be fine in other systems. There are other considerations too, such as an increased failure rate, credit (which reflects the project's needs), task hoarding (project management) and the impact of the settings on system functionality and usability.

So far as I'm aware, there is no advantage to running more than 1 task on Linux, WinXP or 2003 Server, but I cannot test that theory with a Titan X or GTX 980, as I don't own such cards. I think it might be possible to improve overall performance/task throughput (credit) on a Titan X by running multiple tasks on Linux, XP or 2003 Server, and on Vista+ it might be better to run 3 tasks on a Titan X. If anyone has tried this on those cards, please pass on your findings.

For a GTX 780 Ti (3GB), running 2 tasks might be worthwhile, or not, depending on the overall performance gain (if any) and what you want.
Generally it seems to be most beneficial on the GTX 900 series, suggesting an architectural/app significance.

While the most readily noticeable factor is GPU usage, it can belie the performance reality. If GPU usage is already over 85%, the overall performance gain may be quite small, and any gains can be offset by an increased failure rate or a loss of resources or system functionality. The more the power usage rises, the more heat is generated, and when the GPU goes over 70C some GPUs back off on their boost clocks. Certainly, if your clocks drop, gains are reduced.

Although I don't think the BOINC devs should be overly concerned about app performance, as that's something for projects to develop and crunchers to optimize, I would like to see a simple web setting akin to the one used by Einstein, which allows us to enter a value to run multiple tasks or not. With Einstein, 1 = 1 task, 0.5 = 2 tasks…
Maybe this could be in the next server template (if it’s not already an option on the latest)?
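
In the meantime, the client-side equivalent is an app_config.xml in the project folder. Below is a minimal Python sketch that generates one; the app name 'acemdlong' and the data path are assumptions, so check your own client_state.xml and install location before using it:

# Sketch: write a BOINC app_config.xml that runs 2 GPUGrid tasks per GPU.
# gpu_usage 0.5 means each task claims half a GPU, so two tasks share one card.
# The app name and directory below are assumptions - verify them on your system.
from pathlib import Path

APP_CONFIG = """<app_config>
    <app>
        <name>acemdlong</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
"""

# Assumed default BOINC data directory on Windows; adjust for your OS/install.
project_dir = Path(r"C:\ProgramData\BOINC\projects\www.gpugrid.net")
(project_dir / "app_config.xml").write_text(APP_CONFIG)
print("Wrote", project_dir / "app_config.xml")

After writing the file, use the Manager's "Read config files" command (or restart the client) for the change to take effect.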

For GPUGrid and other projects it might be preferable to select against the actual task types, as there is no benefit for some tasks but significant benefit for others. Ideally we would be able to configure this per GPU: in multi-GPU systems with mixed card types it might not be desirable to run 2 WUs on some cards but beneficial on others. Perhaps that's where the BOINC devs could come in, to further develop BOINC around the physical devices. That said, BOINC still only reports the 'top' GPU (which tends to be GPU0, and not actually the top GPU). For example, the top GPU in a combination of two GTX 970s and one GTX 980 should be the GTX 980, but that's not always the case (that was the idea, but not quite the implementation, which IIRC is based on generation and model codename rather than on performance; GFLOPS).
https://www.gpugrid.net/result.php?resultid=14219280
https://www.gpugrid.net/show_host_detail.php?hostid=168841

In theory setups and performances could be reported back to GPUGrid automatically by an add-on app and the data tabulated and interrogated for optimized setups, but I think multi-generation and mixed GPU architectures would quickly skew the data to the point of meaninglessness, as is the case with the GPU type stats page, https://www.gpugrid.net/gpu_list.php

While "Progress % per min" would indicate expected performances, % complete and elapsed time are already there and that’s sufficient for us to calculate total runtime and use Mikey’s technique &/or the GPU Usage indicator of whether or not it’s worthwhile running multiple tasks, and take it from there (test actual performance). Maybe if task run info and history was viewable in the Boinc Manager Statistics it would help?

Going back to the question of using system memory as GPU memory - it's too slow. It would require increased use of the PCIe bus, which is very slow compared to the on-card GDDR5 bus (roughly 16 GB/s for a PCIe 3.0 x16 link versus around 288 GB/s of memory bandwidth on a GTX 780). That's why there are GPUs with 12GB of GDDR5.


