I recently replaced a pair of 750TIs with 2 970s. Now for some reason I can't get any extra tasks to download. I have my cache set to 4 days but I keep getting the following message when I try to fetch more work. From log:
5730 GPUGRID 1/18/2016 7:53:37 AM Requesting new tasks for NVIDIA GPU
5732 GPUGRID 1/18/2016 7:53:39 AM Scheduler request completed: got 0 new tasks
5733 GPUGRID 1/18/2016 7:53:39 AM No tasks sent
5734 GPUGRID 1/18/2016 7:53:39 AM This computer has reached a limit on tasks in progress
The 970s are running 2 tasks each and complete them both in less than 20 hours. Is running 2 tasks per card causing this issue or am I missing something?
The 970s are running 2 tasks each and complete them both in less than 20 hours. Is running 2 tasks per card causing this issue or am I missing something?
Yes, you already have the maximum number of WUs allowed per GPU, which is 2.
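For reference, running 2 concurrent tasks per GPU is normally configured with an app_config.xml in the GPUGrid project folder. This is only a sketch: the application name acemdlong below is an assumption, so check the actual app name shown in your client's event log before using it.

```xml
<!-- app_config.xml, placed in the GPUGrid project folder,
     e.g. C:\ProgramData\BOINC\projects\www.gpugrid.net\ -->
<app_config>
  <app>
    <!-- app name is an assumption; use the name from BOINC's event log -->
    <name>acemdlong</name>
    <gpu_versions>
      <!-- 0.5 GPU per task means 2 tasks share one card -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, use "Read config files" in BOINC Manager (or restart the client) for it to take effect.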
Thanks. It wouldn't be an issue if the downloads completed, but they keep hanging with an HTTP error. Sometimes it takes hours to get all the files needed to run a task. This only happens here, and it really wastes time.
11.03.2016 19:09:52 | GPUGRID | update requested by user
11.03.2016 19:09:57 | GPUGRID | Sending scheduler request: Requested by user.
11.03.2016 19:09:57 | GPUGRID | Requesting new tasks for CPU and NVIDIA GPU
11.03.2016 19:10:02 | GPUGRID | Scheduler request completed: got 0 new tasks
11.03.2016 19:10:02 | GPUGRID | No tasks sent
comp1 - 3×980Ti GPUs installed, 0.5 CPU + 0.33 GPU per task, 6 tasks running, 1 GPU free.
comp2 - 3×980Ti GPUs installed, 0.5 CPU + 0.33 GPU per task, 4 tasks running, 1.5 GPUs free.
2 tasks don't fully load my GPUs, and I get no new tasks.
comp1 - 3×980Ti GPUs installed, 0.5 CPU + 0.33 GPU per task, 6 tasks running, 1 GPU free.
comp2 - 3×980Ti GPUs installed, 0.5 CPU + 0.33 GPU per task, 4 tasks running, 1.5 GPUs free.
2 tasks don't fully load my GPUs, and I get no new tasks.
The GPUGrid project imposes a limit of 2 tasks per GPU; it won't send more, so there's no point in setting 0.33 GPU.
You should run only 2 tasks simultaneously and set the SWAN_SYNC environment variable to make the GPUGrid app use a full CPU core to feed the GPU, reducing the number of CPU tasks accordingly.
Here's how to set the SWAN_SYNC environment variable so the GPUGrid client uses a full CPU core:
Start button ->
type systempropertiesadvanced to the search box ->
press enter, or click on the result ->
click on the "Environment Variables" button near the bottom ->
click on the "New" button near the bottom (system variables section) ->
variable name: SWAN_SYNC ->
variable value: 1 ->
click OK 3 times ->
exit BOINC Manager, stopping all scientific applications ->
restart BOINC Manager
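The GUI steps above can also be done from a command line. This is a sketch: on Windows you would run the setx line from an elevated Command Prompt (it writes a persistent system variable, matching the "system variables" section above); the export line is the Linux equivalent for the environment the BOINC client starts in.

```shell
# Windows (elevated Command Prompt) - persist SWAN_SYNC as a system variable:
#   setx SWAN_SYNC 1 /M
# Linux - export it in the environment the BOINC client is started from:
export SWAN_SYNC=1
echo "SWAN_SYNC=$SWAN_SYNC"
```

Either way, the variable only reaches the science apps after BOINC is fully restarted, which is why the steps end with exiting and restarting BOINC Manager.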
Thanks.
After a crash and re-attaching the project, no new jobs arrive.
12.03.2016 20:03:54 | GPUGRID | [work_fetch] share 0.000 project is backed off (resource backoff: 573.49, inc 600.00)
12.03.2016 20:03:54 | | [work_fetch] ------- end work fetch state -------
12.03.2016 20:03:54 | | [work_fetch] No project chosen for work fetch
12.03.2016 20:04:04 | | NOTICES::write: seqno 6, refresh false, 4 notices
12.03.2016 20:04:57 | | [work_fetch] ------- start work fetch state -------
12.03.2016 20:04:57 | | [work_fetch] target work buffer: 4320.00 + 4320.00 sec
12.03.2016 20:04:57 | | [work_fetch] --- project states ---
12.03.2016 20:04:57 | GPUGRID | [work_fetch] REC 0.000 prio -0.000 can request work
12.03.2016 20:04:57 | | [work_fetch] --- state for CPU ---
12.03.2016 20:04:57 | | [work_fetch] shortfall 138240.00 nidle 16.00 saturated 0.00 busy 0.00
12.03.2016 20:04:57 | GPUGRID | [work_fetch] share 0.000 project is backed off (resource backoff: 576.77, inc 600.00)
12.03.2016 20:04:57 | | [work_fetch] --- state for NVIDIA GPU ---
12.03.2016 20:04:57 | | [work_fetch] shortfall 25920.00 nidle 3.00 saturated 0.00 busy 0.00
12.03.2016 20:04:57 | GPUGRID | [work_fetch] share 0.000 project is backed off (resource backoff: 510.51, inc 600.00)
12.03.2016 20:04:57 | | [work_fetch] ------- end work fetch state -------
12.03.2016 20:04:57 | | [work_fetch] No project chosen for work fetch
12.03.2016 20:05:04 | | NOTICES::write: seqno 6, refresh false, 4 notices
12.03.2016 20:04:57 | GPUGRID | [work_fetch] share 0.000 project is backed off (resource backoff: 510.51, inc 600.00)
This means that you have asked GPUGrid for work in the past and it didn't have any, so to avoid overloading the server your client has been told to back off. It has 510.51 seconds (about 8.5 minutes) remaining.
GPUGrid runs out of work sometimes. I'd recommend attaching to a 0-resource-share backup GPU project, so your 3 idle GPUs can still do work during GPUGrid work outages.
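The remaining-backoff figure in that log line is in seconds; a one-liner to convert it to minutes (nothing here beyond the number quoted above):

```shell
# Convert the resource backoff from the quoted log line to minutes
remaining=510.51
awk -v s="$remaining" 'BEGIN { printf "%.1f min\n", s / 60 }'  # prints "8.5 min"
```

Note that the backoff clears on its own; clicking Update resets it, but if the project still has no work the client just starts a new backoff.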