skgiven (Volunteer moderator, Volunteer tester):
There is presently no work available on the server.

CTAPbIi:
Although there are 532 WUs available, I'm not getting a single one of them...

Toni (Volunteer moderator, Project administrator, Project developer, Project tester, Project scientist):
I'm restarting the server; let us know if it does not clear the situation.

CTAPbIi:
Still nothing...

skgiven (Volunteer moderator, Volunteer tester):
CTAPbIi,
Are you doing a manual update?
Have you uploaded and reported all finished GPUGrid tasks?
I am now running a full complement of GPUGrid tasks.

CTAPbIi:
> CTAPbIi,
> Are you doing a manual update?
> Have you uploaded and reported all finished GPUGrid tasks?
> I am now running a full complement of GPUGrid tasks.

Yep, manual update. It says "got 0 new tasks", and that's it. And sure, I uploaded and reported all tasks over a day ago. Everything looks fine, except I've got nothing to crunch.
But no reason to panic; let's say my card got a short vacation, ha-ha :-)

skgiven (Volunteer moderator, Volunteer tester):
CTAPbIi, check your BOINC settings (Activity and Advanced Network Preferences) and try a system restart.
I am having no problems:
19/10/2010 13:32:09 GPUGRID Sending scheduler request: To fetch work.
19/10/2010 13:32:09 GPUGRID Requesting new tasks for GPU
19/10/2010 13:32:11 GPUGRID Scheduler request completed: got 1 new tasks
19/10/2010 13:32:14 GPUGRID Started download of f173r1-TONI_KKi4-8-LICENSE
19/10/2010 13:32:14 GPUGRID Started download of f173r1-TONI_KKi4-8-COPYRIGHT
19/10/2010 13:32:15 GPUGRID Finished download of f173r1-TONI_KKi4-8-LICENSE
19/10/2010 13:32:15 GPUGRID Finished download of f173r1-TONI_KKi4-8-COPYRIGHT
19/10/2010 13:32:15 GPUGRID Started download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_1
19/10/2010 13:32:15 GPUGRID Started download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_2
19/10/2010 13:32:28 GPUGRID Finished download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_1
19/10/2010 13:32:28 GPUGRID Started download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_3
19/10/2010 13:32:32 GPUGRID Finished download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_2
19/10/2010 13:32:32 GPUGRID Started download of f173r1-TONI_KKi4-8-pdb_file
19/10/2010 13:32:38 GPUGRID Finished download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_3
19/10/2010 13:32:38 GPUGRID Started download of f173r1-TONI_KKi4-8-psf_file
19/10/2010 13:32:40 GPUGRID Finished download of f173r1-TONI_KKi4-8-psf_file
19/10/2010 13:32:40 GPUGRID Started download of f173r1-TONI_KKi4-8-par_file
19/10/2010 13:32:58 GPUGRID Finished download of f173r1-TONI_KKi4-8-pdb_file
19/10/2010 13:32:58 GPUGRID Started download of f173r1-TONI_KKi4-8-conf_file_enc
19/10/2010 13:33:01 GPUGRID Finished download of f173r1-TONI_KKi4-8-conf_file_enc
19/10/2010 13:33:01 GPUGRID Started download of f173r1-TONI_KKi4-8-metainp_file
19/10/2010 13:33:03 GPUGRID Finished download of f173r1-TONI_KKi4-8-metainp_file
19/10/2010 13:33:03 GPUGRID Started download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_7
19/10/2010 13:33:05 GPUGRID Finished download of f173r1-TONI_KKi4-8-f173r1-TONI_KKi4-7-200-RND6033_7
19/10/2010 13:33:24 GPUGRID Finished download of f173r1-TONI_KKi4-8-par_file
I increased the cache, did an update and picked up a new task.
WUs are available. No errors and no database access issues.

I've had work allocated today for v6.11 (cuda31), but only 'no work available' for hosts which require v6.05 (cuda23). All hosts are using the new 260.89 WHQL driver released yesterday.

Beyond:
> CTAPbIi, check your BOINC settings (Activity and Advanced Network Preferences) and try a system restart.
> I increased the cache, did an update and picked up a new task.

I also did all those things, and my dual-GPU box isn't being allowed more than 1 WU (it usually has 2 running and 2 waiting).
This is what I constantly get:
GPUGRID 10-19-10 08:23 Requesting new tasks for CPU and NVIDIA GPU
GPUGRID 10-19-10 08:23 Scheduler request completed: got 0 new tasks
GPUGRID 10-19-10 08:23 Message from GPUGRID: No work sent
Don't say there's no problem.

Beyond:
> I've had work allocated today for v6.11 (cuda31), but only 'no work available' for hosts which require v6.05 (cuda23). All hosts are using the new 260.89 WHQL driver released yesterday.

So it appears that only Fermi WUs are currently available? That fits with what I'm seeing.

CTAPbIi:
> I've had work allocated today for v6.11 (cuda31), but only 'no work available' for hosts which require v6.05 (cuda23). All hosts are using the new 260.89 WHQL driver released yesterday.

Hmmm, looks like I'm not alone. And yep, I'm running the 6.04 app (cuda23). Till now I've got nothing on my host.

Beyond:
Looks like they just added work for the rest of us non-fermi users. Thanks.

GDF (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist):
There are no Fermi WUs; all WUs are sent to everybody.
gdf

CTAPbIi:
Confirming - I've got 2 WUs.

skgiven (Volunteer moderator, Volunteer tester):
Presently I have no problems, though I did see a few access-related issues early in the day. Perhaps these intermittent problems recurred for some people. I noted that the first thing I downloaded was the master file.
I have 2 tasks running on my dual-Fermi system (XP) + 2 in the queue.
I have 1 task running on my GTX260 (Linux) + 1 in the queue.
I have 4 tasks running on my quad-GT240 system (Vista) + 1 in the queue (because I don't want any more).
I have another task running on a single GT240 (W7) system, and I don't want any in the queue.
There are now 2,647 tasks on the server awaiting hosts.

Beyond:
> There are no Fermi WUs; all WUs are sent to everybody.
> gdf

Then why, for several hours this morning, did Fermi cards get WUs and others did not?

skgiven (Volunteer moderator, Volunteer tester):
There is a Fermi app and a non-Fermi app. The tasks that are sent out are the same; it is just that Fermi cards are identified as such and download the Fermi app rather than the non-Fermi app. Since the release of the Fermi app, requests from Fermi cards are handled by looking for a Fermi task first, then a non-Fermi task.
I picked up tasks this morning (UK time) for my GTX260 on Linux and my GT240s at about the same time as the tasks for my Fermis (all within 10 min).
Replacing drives takes time, and restarts are usually needed.
I sent work back, then was not able to, then was, then downloaded some tasks, then could not report, then could, and then got more tasks. This is typical with server restarts.
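[Editor's note: to make that selection order concrete, here is a minimal Python sketch. The names and thresholds are invented for illustration; this is not the actual GPUGRID scheduler code. It only mirrors the behaviour described above: a Fermi card (compute capability 2.x) is matched to the cuda31 app, and its work request is checked against Fermi-tagged work first, falling back to the common pool.]

# Hypothetical illustration only -- not GPUGRID's actual scheduler code.
from collections import deque

def pick_app(compute_capability: float) -> str:
    """Fermi cards (compute capability 2.x) are assumed to get the cuda31
    app; older cards get the cuda23 app."""
    return "acemd_cuda31" if compute_capability >= 2.0 else "acemd_cuda23"

def next_task(is_fermi: bool, fermi_queue: deque, common_queue: deque):
    """Look for a Fermi-tagged task first for Fermi hosts, then fall back
    to the shared pool; non-Fermi hosts only draw from the shared pool."""
    if is_fermi and fermi_queue:
        return fermi_queue.popleft()
    if common_queue:
        return common_queue.popleft()
    return None  # corresponds to "No work sent"

# Example: a GT 240 (compute capability 1.2) host asking for work.
fermi_tasks, common_tasks = deque(), deque(["f173r1-TONI_KKi4-8"])
print(pick_app(1.2), next_task(False, fermi_tasks, common_tasks))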

Beyond:
SK, server restarts were not the reason non-Fermi cards were not receiving work yesterday AM for many hours. I manually polled for work many times. In addition, my machines run a BOINC script every 15 minutes updating GPUGRID. On the server page work was shown as available and WUs were being sent out, just not to non-Fermi cards. Others reported the same. You can close your eyes and deny what was happening, but that doesn't change reality. There was a brief time when work was available earlier for non-Fermi, but if your machines missed that window it was lights out for hours. We really enjoy reporting problems and then being told that there were none. It's happened before and it's irritating.

ftpd:
7-11-2010 13:54:02 GPUGRID Started upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_0
7-11-2010 13:54:02 GPUGRID Started upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_1
7-11-2010 13:54:05 GPUGRID Finished upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_0
7-11-2010 13:54:05 GPUGRID Started upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_2
7-11-2010 13:54:09 GPUGRID Finished upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_1
7-11-2010 13:54:09 GPUGRID Started upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_3
7-11-2010 13:54:11 GPUGRID Finished upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_2
7-11-2010 13:54:11 GPUGRID Started upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_4
7-11-2010 13:54:14 GPUGRID Finished upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_3
7-11-2010 13:54:14 GPUGRID Started upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_7
7-11-2010 13:54:15 GPUGRID Finished upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_7
7-11-2010 13:54:15 GPUGRID Started upload of 269-KASHIF_HIVPR_n1_bound_cl_ba2-60-100-RND7041_2_0
7-11-2010 13:54:17 GPUGRID Finished upload of 269-KASHIF_HIVPR_n1_bound_cl_ba2-60-100-RND7041_2_0
7-11-2010 13:54:17 GPUGRID Started upload of 269-KASHIF_HIVPR_n1_bound_cl_ba2-60-100-RND7041_2_4
7-11-2010 13:54:35 GPUGRID Finished upload of 269-KASHIF_HIVPR_n1_bound_cl_ba2-60-100-RND7041_2_4
7-11-2010 13:54:35 GPUGRID Started upload of 269-KASHIF_HIVPR_n1_bound_cl_ba2-60-100-RND7041_2_7
7-11-2010 13:54:36 GPUGRID Finished upload of 269-KASHIF_HIVPR_n1_bound_cl_ba2-60-100-RND7041_2_7
7-11-2010 13:55:39 GPUGRID Finished upload of 274-KASHIF_HIVPR_n1_bound_so_ba1-61-100-RND5424_1_4
7-11-2010 14:00:31 GPUGRID work fetch resumed by user
7-11-2010 14:00:34 GPUGRID Sending scheduler request: To fetch work.
7-11-2010 14:00:34 GPUGRID Reporting 2 completed tasks, requesting new tasks for CPU and GPU
7-11-2010 14:00:35 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:00:35 GPUGRID Message from server: No work sent
7-11-2010 14:00:36 GPUGRID work fetch suspended by user
7-11-2010 14:01:08 GPUGRID work fetch resumed by user
7-11-2010 14:01:11 GPUGRID Sending scheduler request: To fetch work.
7-11-2010 14:01:11 GPUGRID Requesting new tasks for CPU and GPU
7-11-2010 14:01:12 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:01:12 GPUGRID Message from server: No work sent
7-11-2010 14:01:13 GPUGRID work fetch suspended by user
7-11-2010 14:02:11 GPUGRID work fetch resumed by user
7-11-2010 14:02:12 GPUGRID Sending scheduler request: To fetch work.
7-11-2010 14:02:12 GPUGRID Requesting new tasks for GPU
7-11-2010 14:02:14 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:02:14 GPUGRID Message from server: No work sent
7-11-2010 14:02:50 GPUGRID Sending scheduler request: To fetch work.
7-11-2010 14:02:50 GPUGRID Requesting new tasks for CPU
7-11-2010 14:02:51 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:02:51 GPUGRID Message from server: No work sent
7-11-2010 14:04:26 GPUGRID Sending scheduler request: To fetch work.
7-11-2010 14:04:26 GPUGRID Requesting new tasks for CPU
7-11-2010 14:04:28 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:04:28 GPUGRID Message from server: No work sent
7-11-2010 14:04:45 GPUGRID update requested by user
7-11-2010 14:04:49 GPUGRID Sending scheduler request: Requested by user.
7-11-2010 14:04:49 GPUGRID Requesting new tasks for CPU and GPU
7-11-2010 14:04:51 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:04:51 GPUGRID Message from server: Not sending work - last request too recent: 22 sec
7-11-2010 14:05:26 GPUGRID Sending scheduler request: To fetch work.
7-11-2010 14:05:26 GPUGRID Requesting new tasks for CPU and GPU
7-11-2010 14:05:27 GPUGRID Scheduler request completed: got 0 new tasks
7-11-2010 14:05:27 GPUGRID Message from server: No work sent
No work available? The server says more than 1,000 WUs, but no download!
Windows XP Pro, GTX295, latest NVIDIA driver, BOINC Manager 6.10.58.
____________
Ton (ftpd) Netherlands

skgiven (Volunteer moderator, Volunteer tester):
You presently have 2 tasks in progress on that system.
I would expect you will get new tasks closer to the time your present tasks are due to finish.
I think you are not picking up two more tasks for your queue because the server is deciding not to allocate them, based on things such as your estimated remaining runtime, the number of failed tasks, and possibly the server being busy sending tasks to other (higher-priority) systems.

ftpd:
Hi Kev,
I normally get 4 WUs at any time, whatever this machine is doing.
So this is strange for me!
____________
Ton (ftpd) Netherlands

skgiven (Volunteer moderator, Volunteer tester):
Hi Ton,
It's picked up two more now.
You might want to increase your cache slightly, to see if that helps.
Cheers,
Kev

> You might want to increase your cache slightly, to see if that helps.

Nothing to do with cache. He's already "Requesting new tasks for GPU".
That message log is there for a purpose - to aid diagnosis, by directing you to look in the right place. In this case, on the server - hence the post in this message-board area.

skgiven (Volunteer moderator, Volunteer tester):
The scheduler request specifies an amount of work in seconds, <work_req_seconds>. If the cache is increased, the next update will include this increase. If the feeder has a task that will not take longer than that, it will issue the task. Of course, there is also a limit of 2 tasks per GPU; once you have 2 tasks per GPU, increasing the cache further will not result in more tasks being issued.

> The scheduler request specifies an amount of work in seconds, <work_req_seconds>. If the cache is increased, the next update will include this increase. If the feeder has a task that will not take longer than that, it will issue the task. Of course, there is also a limit of 2 tasks per GPU; once you have 2 tasks per GPU, increasing the cache further will not result in more tasks being issued.

The phrase in red is nonsense. If you turn on the appropriate debug flags, you'll see:
07-Nov-2010 21:49:16 [GPUGRID] [sched_op] Starting scheduler request
07-Nov-2010 21:49:16 [GPUGRID] [work_fetch] request: 0.00 sec CPU (0.00 sec, 0.00) NVIDIA GPU (76.16 sec, 0.00)
07-Nov-2010 21:49:16 [GPUGRID] Sending scheduler request: To fetch work.
07-Nov-2010 21:49:16 [GPUGRID] Requesting new tasks for NVIDIA GPU
07-Nov-2010 21:49:16 [GPUGRID] [sched_op] CPU work request: 0.00 seconds; 0.00 CPUs
07-Nov-2010 21:49:16 [GPUGRID] [sched_op] NVIDIA GPU work request: 76.16 seconds; 0.00 GPUs
07-Nov-2010 21:49:17 [GPUGRID] Scheduler request completed: got 1 new tasks
07-Nov-2010 21:49:17 [GPUGRID] [sched_op] Server version 611
07-Nov-2010 21:49:17 [GPUGRID] Project requested delay of 31 seconds
07-Nov-2010 21:49:17 [GPUGRID] [sched_op] estimated total CPU task duration: 0 seconds
07-Nov-2010 21:49:17 [GPUGRID] [sched_op] estimated total NVIDIA GPU task duration: 20774 seconds
In other words, if you request work, you are eligible to receive sufficient quanta of work to satisfy that request. If I had increased the cache to request 176 seconds, 1,076 seconds, or 10,076 seconds, the outcome would have been no different.
There may be other restrictions. Here, there is the maximum of 2 tasks per GPU that you mention. Work should not be issued if the host has no chance of finishing it before the deadline, and so on.
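[Editor's note: the behaviour the log above demonstrates can be summarised in a rough Python sketch. The names and checks are simplified and invented; this is not the real BOINC scheduler. The point it encodes: any positive work request makes the host eligible for one whole task, and the per-GPU limit and deadline feasibility decide whether it is sent; the number of seconds requested does not cap the size of the task.]

# Illustrative sketch only, not the real BOINC/GPUGRID scheduler.
MAX_TASKS_PER_GPU = 2  # limit mentioned in this thread

def tasks_to_send(requested_seconds: float,
                  tasks_on_host: int,
                  gpus: int,
                  est_task_runtime: float,
                  seconds_until_deadline: float) -> int:
    if requested_seconds <= 0:
        return 0                     # host did not ask for GPU work
    if tasks_on_host >= MAX_TASKS_PER_GPU * gpus:
        return 0                     # already holding the maximum
    if est_task_runtime > seconds_until_deadline:
        return 0                     # could not finish before the deadline
    return 1                         # send one whole task, however large

# A request for only 76 s of work still yields a ~20,000 s task:
print(tasks_to_send(76.16, tasks_on_host=1, gpus=1,
                    est_task_runtime=20_774, seconds_until_deadline=5 * 86_400))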

skgiven (Volunteer moderator, Volunteer tester):
Clearly my take on the work-fetch routine is off; ask for 76 sec of work and get 20K sec. I thought you would ask for 20K sec and get 1 task, but I guess that would prevent tasks over 10 days from ever being sent out. Are you totally sure that increasing the cache makes no difference to the server's calculation of whether to send work, and that it does not increase the priority? On some CPU projects I have to increase the cache to about 5 days to get CPU work; at 2 or 3 days it just kept asking for GPU tasks, but as soon as I raised it to 5 days work was sent out.
I don't know exactly how the GPUGrid servers decide whether to send work, but I know the server keeps computer information (system specs and performance information) about individual systems that allows it to determine if and what tasks to send. This includes CPU type, number of processors (cores/threads), co-processors (GPU type), operating system, client version, memory, hard-drive space, RAM, average credit, upload and download average rates, the percentage of time the BOINC client runs, Internet connection percentage, a task duration correction factor and the maximum daily WU quota per CPU (30 per day), which actually means you can have only 30 failures per day before the server will stop sending you tasks.
While we can all see things such as upload and download rates and ask for logs, we cannot see how much hard-drive space is free on another person's computer or how busy the server was at any given time, and "got 0 new tasks / no work sent" is not much to go on.
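[Editor's note: for illustration only, the daily-quota point above can be pictured like this. It is a guess at the general shape in Python; the field names are invented and this is not BOINC's actual bookkeeping.]

# Guess at the shape of a per-host record and the daily quota check
# described above -- invented names, not BOINC's schema.
from dataclasses import dataclass

@dataclass
class HostRecord:
    cpus: int
    max_wus_per_cpu_per_day: int = 30   # figure quoted in the post above
    errors_today: int = 0

    def may_receive_work(self) -> bool:
        """Stop sending once today's failures reach the daily quota."""
        return self.errors_today < self.cpus * self.max_wus_per_cpu_per_day

host = HostRecord(cpus=4)
host.errors_today = 120
print(host.may_receive_work())   # False: quota exhausted until the daily reset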

> On some CPU projects I have to increase the cache to about 5 days to get CPU work; at 2 or 3 days it just kept asking for GPU tasks, but as soon as I raised it to 5 days work was sent out.
> ...
> "got 0 new tasks / no work sent" is not much to go on.

You need to get very clear, in your own mind if nowhere else, the difference between "not requesting" and "not sending out". You can see from my log the extra information you get from the [work_fetch] and [sched_op] debug flags; [sched_op] is probably the clearest. That should help you answer your own question.
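[Editor's note: for anyone following along, here is a rough Python helper for pulling that distinction out of a client message log. It assumes messages shaped like the excerpts quoted in this thread ("[sched_op] NVIDIA GPU work request: 76.16 seconds" and "Scheduler request completed: got 1 new tasks"); real logs may differ between BOINC versions.]

# Rough helper: separate "how much GPU work was requested" from
# "how many tasks were actually received" in a BOINC client log.
import re
import sys

request_re = re.compile(r"GPU work request:\s*([\d.]+)\s*seconds")
result_re = re.compile(r"Scheduler request completed: got (\d+) new tasks")

def summarise(log_lines):
    requested, received = 0.0, 0
    for line in log_lines:
        if m := request_re.search(line):
            requested += float(m.group(1))
        elif m := result_re.search(line):
            received += int(m.group(1))
    return requested, received

if __name__ == "__main__":
    req, got = summarise(sys.stdin)
    if req == 0:
        print("Client never asked for GPU work - look at local settings.")
    else:
        print(f"Asked for {req:.0f} s of GPU work, got {got} task(s) - "
              "if that is 0, the server chose not to send any.")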

GPU Results ready to send: 0 0 0
:-(
____________
Thanks - Steve

skgiven (Volunteer moderator, Volunteer tester):
There were at least 127 added - now 103.

Beyond:
Now I'm getting this message on one GT 240 machine and it won't DL WUs:
GPUGRID 11-12-10 13:20 update requested by user
GPUGRID 11-12-10 13:20 Sending scheduler request: Requested by user.
GPUGRID 11-12-10 13:20 Requesting new tasks for CPU and NVIDIA GPU
GPUGRID 11-12-10 13:20 Scheduler request completed: got 0 new tasks
GPUGRID 11-12-10 13:20 Message from GPUGRID: No work sent
GPUGRID 11-12-10 13:20 Message from GPUGRID: CUDA version 3.1 needed
Here's the GPU/driver info:
11-12-10 13:12 NVIDIA GPU 0: GeForce GT 240 (driver version 19745, CUDA version 3000, compute capability 1.2, 512MB, 288 GFLOPS peak)

Beyond:
> Now I'm getting this message on one GT 240 machine and it won't DL WUs:
> GPUGRID 11-12-10 13:20 update requested by user
> GPUGRID 11-12-10 13:20 Sending scheduler request: Requested by user.
> GPUGRID 11-12-10 13:20 Requesting new tasks for CPU and NVIDIA GPU
> GPUGRID 11-12-10 13:20 Scheduler request completed: got 0 new tasks
> GPUGRID 11-12-10 13:20 Message from GPUGRID: No work sent
> GPUGRID 11-12-10 13:20 Message from GPUGRID: CUDA version 3.1 needed
> Here's the GPU/driver info:
> 11-12-10 13:12 NVIDIA GPU 0: GeForce GT 240 (driver version 19745, CUDA version 3000, compute capability 1.2, 512MB, 288 GFLOPS peak)
It finally got a WU after an hour of retrying. Why was it getting this message:
GPUGRID 11-12-10 13:20 Message from GPUGRID: CUDA version 3.1 needed

GPU Results ready to send
acemdlong: 0
acemd2: 0
acemd2 sometimes shows 3 available WUs.
Will there be enough new WUs over this weekend?

Yep, just grabbed the last workunit. Hopefully they'll stuff the server with more workunits soon.

skgiven (Volunteer moderator, Volunteer tester):
There seem to be a few coming in now and again. They don't hang around long, though. I think an automatic task-generation system is being employed, which creates new tasks from returned tasks. So if you don't get a task straight away, you might get one in a few minutes. If you check the server status and do an update when there are tasks, you should pick one up.
I typically run the odd MW task during what are fairly infrequent shortages here.
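[Editor's note: one low-effort way to follow that advice is to poll the server status page and poke the client when unsent tasks appear. A rough Python sketch follows; the status URL, the text it matches against, and the boinccmd invocation are assumptions to verify for your own setup.]

# Rough sketch of "check the server status and do an update when there are
# tasks". URLs, page wording, and the boinccmd call are assumptions.
import re
import subprocess
import time
import urllib.request

STATUS_URL = "http://www.gpugrid.net/server_status.php"   # assumed URL
PROJECT_URL = "http://www.gpugrid.net/"                    # assumed URL

def unsent_tasks(html: str) -> int:
    # Assumes the page lists counts next to the phrase "ready to send";
    # adjust the pattern to whatever the real page shows.
    numbers = re.findall(r"ready to send\D*(\d+)", html, re.IGNORECASE)
    return sum(int(n) for n in numbers)

while True:
    with urllib.request.urlopen(STATUS_URL, timeout=30) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    if unsent_tasks(page) > 0:
        # Ask the local client to contact the project now.
        subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"])
    time.sleep(15 * 60)   # be polite: poll no more than every 15 minutes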

Toni (Volunteer moderator, Project administrator, Project developer, Project tester, Project scientist):
I just created 1000 new workunits (KKAL2), with more on the way.

GPU Results ready to send
acemdlong: 0
acemd2: 55

The number of unsent workunits at the moment is very, very, very low:
ACEMD2 GPU molecular dynamics: 2
ACEMD beta version: 0
Long runs (8-12 hours on fastest card): 1
It's been similarly low for the last couple of days; nevertheless, my crunchers were able to get enough work.
But now it seems to be changing for the worse, and soon they will run dry.
When do you plan to issue new workunits?