Message boards : Number crunching : Not getting new work

STARBASEn
Joined: 17 Feb 09
Posts: 91
Credit: 1,603,303,394
RAC: 0
Message 47522 - Posted: 3 Jul 2017 | 0:23:13 UTC
Last modified: 3 Jul 2017 | 0:30:06 UTC

I haven't received new work for nearly a day. The GPUs are asking, but the reply is consistently 'Project has no tasks available'. Is anyone else experiencing this? Is this a normal periodic occurrence?

Edit: I stand corrected, I did receive one WU about 10 hours ago, but that one is almost finished, so once it is done all my GPUs will be idle and waiting.

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47527 - Posted: 3 Jul 2017 | 16:40:53 UTC - in response to Message 47522.

... Is this a normal periodic occurrence?
yes, it is

Misfit
Joined: 23 May 08
Posts: 33
Credit: 610,551,356
RAC: 0
Message 47528 - Posted: 3 Jul 2017 | 19:36:45 UTC

If this is your only GPU project you can attach to a second GPU project and set its resource share to zero. That way it'll only run when this project is out of work.
____________
me@rescam.org

ChristianVirtual
Joined: 16 Aug 14
Posts: 17
Credit: 378,346,925
RAC: 0
Message 47529 - Posted: 3 Jul 2017 | 21:00:26 UTC

How long did these dry periods last in the past? I have one math-oriented backup project running, but it feels like a waste of energy to me.
Based on experience: how many days do they need to refill the queue? Or has their research focus finally changed?

STARBASEn
Joined: 17 Feb 09
Posts: 91
Credit: 1,603,303,394
RAC: 0
Message 47530 - Posted: 3 Jul 2017 | 22:49:32 UTC

yes, it is

Ah, thanks for the reply. At least now I know it wasn't something in my setup.

If this is your only GPU project you can attach to a second GPU project and set its resource share to zero. That way it'll only run when this project is out of work.

Yes, at this time. I don't mind a pause here and there as long as I know what the norm is. Good suggestion; I may look into some other GPU projects.
____________

Crunching since Feb 2003 (United Devices, Find-a-Drug)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 47536 - Posted: 5 Jul 2017 | 20:14:03 UTC

In the past they have mostly run dry over the weekend and refilled the queue at the beginning of the week. That has not happened yet this time, though.

Personally I run Einstein@Home at 1/100 the resource share of GPU-Grid, which works fine. This way BOINC builds up a larger Einstein buffer when GPU-Grid is down than it would at resource share 0. Make sure to max your GPU memory clock and run 2 WUs concurrently (this can be set in the profile).
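For anyone who prefers a local setting over the website profile, roughly the same thing can be done with an app_config.xml in the GPUGrid project folder (e.g. projects/www.gpugrid.net/). A minimal sketch, assuming the long-run app is still named acemdlong (check client_state.xml for the exact app names on your host):

  <app_config>
    <app>
      <name>acemdlong</name>
      <gpu_versions>
        <!-- 0.5 GPU per task means two tasks share one GPU -->
        <gpu_usage>0.5</gpu_usage>
        <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
    </app>
  </app_config>

After saving the file, re-read the config files from the BOINC Manager (or restart the client) so it takes effect.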

MrS
____________
Scanning for our furry friends since Jan 2002

PappaLitto
Joined: 21 Mar 16
Posts: 511
Credit: 4,617,042,755
RAC: 0
Message 47537 - Posted: 5 Jul 2017 | 20:29:58 UTC

The 2 WUs per GPU setting might be a cause of these shortages, as each new WU is only created once the previous one is uploaded and validated. I suggest having only one WU per GPU.

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47541 - Posted: 6 Jul 2017 | 4:43:37 UTC - in response to Message 47537.

The 2 WUs per GPU setting might be a cause of these shortages, as each new WU is only created once the previous one is uploaded and validated. I suggest having only one WU per GPU.

Well, if there are no WUs available at all (like during the past few days), it makes no difference whether there are 2 WUs or 1 WU per GPU.
The well has run dry :-(

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47542 - Posted: 6 Jul 2017 | 5:13:36 UTC - in response to Message 47536.

In the past they have mostly run dry over the weekend and refilled the queue at the beginning of the week. That has not happened yet this time, though.

I suspect that this time it has to do with the vacation period that has started recently.

But, as said before, it would just be nice if we received any kind of indication as to when we can expect new WUs.

mmonnin
Joined: 2 Jul 16
Posts: 332
Credit: 3,772,896,065
RAC: 4,765,302
Message 47553 - Posted: 8 Jul 2017 | 14:04:40 UTC

Has this been the 1st shortage since admins mentioned stopping DC support? Might just be the end for good.

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47555 - Posted: 8 Jul 2017 | 18:32:59 UTC - in response to Message 47553.

Has this been the 1st shortage since admins mentioned stopping DC support?

hm, where/when did they mention this?

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,599,311,851
RAC: 8,786,170
Message 47556 - Posted: 8 Jul 2017 | 18:59:20 UTC - in response to Message 47555.

Has this been the 1st shortage since admins mentioned stopping DC support?

hm, where/when did they mention this?

Here, nearly a month ago.

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47557 - Posted: 9 Jul 2017 | 10:58:03 UTC - in response to Message 47556.

Has this been the 1st shortage since admins mentioned stopping DC support?

hm, where/when did they mention this?

Here, nearly a month ago.

ah, this is what you mean; I remember this posting by Stefan.
However, I don't think that many of the current problems are caused by BOINC. It's basically the new, buggy crunching software, which had to be put together in a hurry in April after the former one stopped working from one moment to the next.

Then, there are a few other things that would help us crunchers - and again, they have nothing to do with BOINC.

fractal
Joined: 16 Aug 08
Posts: 87
Credit: 1,248,879,715
RAC: 0
Message 47572 - Posted: 10 Jul 2017 | 22:14:27 UTC

Has anyone received work for the past week or three? I see units show up on the server status page but I don't get any.

My machines have run GPUGrid for several years and they haven't changed. I did have them on another project for 6 months but haven't had any work since I came back.

mmonnin
Joined: 2 Jul 16
Posts: 332
Credit: 3,772,896,065
RAC: 4,765,302
Message 47573 - Posted: 10 Jul 2017 | 22:35:33 UTC - in response to Message 47556.

Has this been the 1st shortage since admins mentioned stopping DC support?

hm, where/when did they mention this?

Here, nearly a month ago.


That's the thread I was thinking of. The post was a bit farther up:
https://www.gpugrid.net/forum_thread.php?id=4584&nowrap=true#47404

Fixing bugs with BOINC is relatively pointless from our perspective (and time-intensive). We are considering rather other options like moving out of it, but don't ask when or how as it's more an idea than a scheduled plan.

I am sorry for those inconveniences that this causes.

The reason we cannot address technical issues with BOINC is that we don't have anyone in the lab anymore who knows his way around it and that priorities are higher on getting scientific work done. Of course you have a point that this will eventually bite us in the ass since we won't be able to do scientific work without crunchers but it's a tricky thing to manage.


I agree though. Other GPU apps run better in many ways. The issues are with the project, but not because GPUGrid is using BOINC as its distribution channel.

OTOH, I got 2 tasks just about 2 hours ago.

klepel
Joined: 23 Dec 09
Posts: 189
Credit: 4,195,561,293
RAC: 1,590,531
Message 47575 - Posted: 10 Jul 2017 | 23:33:00 UTC - in response to Message 47572.
Last modified: 10 Jul 2017 | 23:33:50 UTC

fractal,
There has been a problem with the drivers around the time you stopped receiving new WUs. Your driver is still 367.55 on one of the cards and 355.11 on the other with Cuda65. Please try to update your driver. I am using 375.66 (Cuda80) on a GTX1070 in Lubuntu, and do not have any problems with downloading and processing WUs.
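A quick way to confirm which driver the BOINC client actually sees, both before and after updating, is nvidia-smi:

  nvidia-smi --query-gpu=name,driver_version --format=csv

On Ubuntu-family systems, one common route (just a sketch of one option, not necessarily the one I used on Lubuntu) is the graphics-drivers PPA:

  # add the PPA that carries newer NVIDIA drivers, then install the 375 series
  sudo add-apt-repository ppa:graphics-drivers/ppa
  sudo apt-get update
  sudo apt-get install nvidia-375

Reboot afterwards and restart BOINC so the new driver and CUDA runtime are picked up.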
klepel

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47576 - Posted: 11 Jul 2017 | 4:47:06 UTC - in response to Message 47575.

fractal,
There has been a problem with the drivers around the time you stopped receiving new WUs. Your driver is still 367.55 on one of the cards and 355.11 on the other with Cuda65.

From what I remember, when the new crunching software was introduced back in April, at least driver version 381.65 was needed.
Meanwhile, there are even newer versions available from NVIDIA.

fractal
Joined: 16 Aug 08
Posts: 87
Credit: 1,248,879,715
RAC: 0
Message 47577 - Posted: 11 Jul 2017 | 6:05:01 UTC - in response to Message 47575.
Last modified: 11 Jul 2017 | 6:07:07 UTC

fractal,
There has been a problem with the drivers around the time you stopped receiving new WUs. Your driver is still 367.55 on one of the cards and 355.11 on the other with Cuda65. Please try to update your driver. I am using 375.66 (Cuda80) on a GTX1070 in Lubuntu, and do not have any problems with downloading and processing WUs.
klepel

I updated to the latest public version from nvidia, 375.66, and am getting a different message. There is no work so I'll check again tomorrow. Thanks for the suggestion.

FWIW, http://www.gpugrid.net/join.php says 343 is good enough.

Erich56
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Message 47578 - Posted: 11 Jul 2017 | 6:13:01 UTC - in response to Message 47577.

I updated to the latest public version from nvidia, 375.66,

hm, this driver version is definitely an older one. Which OS are you using?

fractal
Joined: 16 Aug 08
Posts: 87
Credit: 1,248,879,715
RAC: 0
Message 47581 - Posted: 11 Jul 2017 | 20:44:16 UTC
Last modified: 11 Jul 2017 | 20:47:16 UTC

Much improved. Both of my systems are crunching cuda80 units.

I am running Ubuntu 12.04 LTS on both machines. I went to the NVIDIA download page and got the latest released driver for the 970 on Linux 64-bit.

klepel
Joined: 23 Dec 09
Posts: 189
Credit: 4,195,561,293
RAC: 1,590,531
Message 47582 - Posted: 11 Jul 2017 | 22:24:23 UTC

great news!
