Message boards : Graphics cards (GPUs) : Important question on BOINC and a machine with Fermi/G200 cards
Can you confirm that if you have multiple GPUs installed, BOINC only gives work to the one with the latest compute capability? | |
ID: 19062 | Rating: 0 | rate: / Reply Quote | |
Can you confirm that if you have multiple GPUs installed, BOINC only gives work to the one with the latest compute capability? When the BOINC client requests work from the server, it specifies which of the three main hardware classes it needs work for: CPU, NVidia GPU, and/or ATI GPU. It also specifies the details of the devices installed in your computer. I believe the BOINC server allocates work (specifically, by app_version) to what it believes is the most capable device. So in your example, the GPUGrid server would allocate work using the 6.11 (cuda31) application (assuming Windows), because the GTX480 is the more capable device. Similarly, the BOINC client will, by default, only use the most capable device for crunching, so that 6.11 (cuda31) work will be processed, as intended, by the GTX480. But it is possible to set the configuration flag <use_all_gpus>, and many volunteers do. In that case, work would additionally be crunched on the GTX280. My understanding, though, is that both NVidia GPUs would be allocated work from a single shared cache, so the GTX280 (or any lesser card) would also be asked to run the 6.11 (cuda31) application. I don't think there is any way to specify that two different NVidia GPUs in a single host should receive separate/distinct tasks, or run separate/distinct applications. | |
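For anyone wanting to try this: <use_all_gpus> goes in the <options> section of cc_config.xml in the BOINC data directory. A minimal sketch (the surrounding structure is standard BOINC client configuration):

```xml
<cc_config>
  <options>
    <!-- Use every GPU in the host, not just the most capable one -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```

The client reads this file at startup, or when you choose "Read config file" from the BOINC Manager's Advanced menu.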
ID: 19064 | Rating: 0 | rate: / Reply Quote | |
So with <use_all_gpus> set, both cards receive work. Is that right? | |
ID: 19065 | Rating: 0 | rate: / Reply Quote | |
Just confirming that this is the case; it has been observed several times here. We presently recommend grouping Fermi cards only with other Fermis. All non-Fermi cards (CC 1.1, CC 1.2 and CC 1.3) can crunch the same 6.05/6.04 work. | |
ID: 19067 | Rating: 0 | rate: / Reply Quote | |
Ok, | |
ID: 19068 | Rating: 0 | rate: / Reply Quote | |
Is there any technical reason why the v6.11 Fermi code could not be combined with the v6.04/.05 code into a single 'fat' executable which would run correctly on both types of hardware? This has already been achieved by the NVidia programmers who supplied a 'fat' Fermi executable for SETI@home; I have verified that their v6.10 Fermi executable runs correctly on compute capability 1.1 cards. | |
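For reference, and not as a claim about how GPUGrid or SETI@home actually build their applications: a 'fat' binary is produced by passing multiple -gencode options to nvcc, which embeds code for several compute capabilities in one executable, and the driver selects the best match at load time. A sketch (the source filename is hypothetical):

```shell
# Embed SM 1.1, SM 1.3 and SM 2.0 (Fermi) code in a single executable.
# The CUDA runtime picks the variant matching the installed GPU.
nvcc -O2 \
  -gencode arch=compute_11,code=sm_11 \
  -gencode arch=compute_13,code=sm_13 \
  -gencode arch=compute_20,code=sm_20 \
  -o acemd2 acemd2.cu
```

The cost is a larger executable and longer build times, not any run-time penalty on older cards.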
ID: 19077 | Rating: 0 | rate: / Reply Quote | |
I have changed the server policy. | |
ID: 19112 | Rating: 0 | rate: / Reply Quote | |
Hang on a second. <coprocs> I've picked out the lines the scheduler will have used to assign the app_version. With a compute capability of 1.1 (major/minor), this host should still be running the v6.05 application (shouldn't it?), even though the driver has already been upgraded for non-GPUGrid reasons: specifically, testing other BOINC applications which are already multi-architecture compiled. There's no Fermi card in this host to complicate matters, yet the work issued in response to this request was tagged for v6.11, and indeed triggered a first-time download of the v6.11 application:
30/10/2010 12:45:31 GPUGRID Started download of acemd2_6.11_windows_intelx86__cuda31.exe
30/10/2010 12:45:49 GPUGRID Finished download of acemd2_6.11_windows_intelx86__cuda31.exe
(and in the process, validated the 'new_app_version' handling of a specialised BOINC client I'm testing):
30/10/2010 12:45:29 [aDCF] in handle_scheduler_reply(), new app_version aDCF initialised: acemd2 611 (cuda31) windows_intelx86. Defaulting to Project DCF 6.050110
I'll see how this runs when the current task finishes, in about 10 hours, but the preliminary indication is that you may just have excluded every non-Fermi card with updated drivers from this project until the 3.2 app is ready for distribution. If that is the case, surely the server configuration change should have been deferred until that application is ready? | |
ID: 19117 | Rating: 0 | rate: / Reply Quote | |
CUDA 3.1 was already fine for G200. The problem arose only on mixed Fermi/G200 configurations, but the server limitation was ineffective in exactly that case if people had set <use_all_gpus>, so I removed it. | |
ID: 19119 | Rating: 0 | rate: / Reply Quote | |
OK. Thanks for the reassurance. We'll see how it gets on with my G92b overnight, then. | |
ID: 19120 | Rating: 0 | rate: / Reply Quote | |
Why use 260.89 rather than 260.99? | |
ID: 19126 | Rating: 0 | rate: / Reply Quote | |
Why use 260.89 rather than 260.99? Because I've shut down for a driver upgrade once in this sequence already (when 260.89 first came out), and it's more efficient to leave it running than to shut down again for a second upgrade. Unless you can point to a visible improvement between the two? | |
ID: 19127 | Rating: 0 | rate: / Reply Quote | |
Why use 260.89 rather than 260.99? When I installed .99 over the top of .89 (as a clean install), it didn't require a reboot. But you will need to shut down BOINC while you do it. As for advantages, I haven't seen anything mentioned suggesting it fixes anything other than some games. The installer change wasn't mentioned either (apart from the notes about the .89 installer change). ____________ BOINC blog | |
ID: 19129 | Rating: 0 | rate: / Reply Quote | |
Did you observe any speed loss (frequency) with your 9800GT's? | |
ID: 19132 | Rating: 0 | rate: / Reply Quote | |
Did you observe any speed loss (frequency) with your 9800GT's? Speaking of speed, I'm seeing a HUGE slowdown on all 3 of my GT 240 cards with the 6.11 app. It seems to be in the neighborhood of 60%, but very few WUs have completed because they're running so slowly. GPU usage is still above 90%, so that's not the problem. I hope we can go back to the 6.05 app quickly, as 6.11 seems to be a big waste of electricity. They're all on WinXP64. | |
ID: 19136 | Rating: 0 | rate: / Reply Quote | |
Did you observe any speed loss (frequency) with your 9800GT's? First task reported with v6.11 on 9800GT: host 43362 I don't think I'd call that runtime variation significant until we get more data. But at least it ran, and validated: let's hope the HIVPR_n1 I'm running on another machine is as successful. | |
ID: 19143 | Rating: 0 | rate: / Reply Quote | |
Sorry guys, HIVPR_n1 crashed as usual: host 45218 | |
ID: 19147 | Rating: 0 | rate: / Reply Quote | |