Message boards : Server and website : Down to 5 tasks in progress
I wonder who those last five tasks belong to. I wonder if we'll see the number of tasks go down to zero before tasks start flowing again.
ID: 56266
That hasn't always been the requirement, but it could be possible if the admins wish to completely close out one data set before starting another.
ID: 56267
As Richard posted, it shouldn't be necessary to remove all tasks before deploying a new application.
ID: 56268
Hoping new tasks come soon. I bought a new graphics card not too long ago to speed up video rendering, but also with a mind to crank out GPUGrid work when I wasn't otherwise using it. Currently it's working on Einstein stuff, but I'd rather be curing diseases and helping people.
ID: 56269
I've been running Folding@home while waiting for new tasks. Plenty of good projects focusing on disease there.
ID: 56270
I don't think you can run both Folding's client and the BOINC client on the same hardware at the same time, or can you?
ID: 56271
"I don't think you can run both Folding's client and the BOINC client on the same hardware at the same time, or can you?"
Actually, you can run both at the same time, but it slows down both projects. I have done it myself. You can also throttle Folding to Light, Medium or Full Power mode.
ID: 56272
"Actually, you can run both at the same time, but it slows down both projects. I have done it myself."
Me too. Right now I'm running all my GPUs for F@H, but I leave my slowest one ready to download any GPUGRID tasks available. It will let me know when to switch my faster cards back to this project after finishing up there, because new WUs are available. Everyone who wants to do medical research on their new RTX 30xx cards will find they kick butt doing CUDA FAHcore tasks. You can use dissimilar GPUs in the same host without any restart errors, too.
The downsides are less statistical information, a rather clunky GUI, and the fact that points no longer apply to BOINC credit. There are currently over 2.7M "donors" crunching.
ID: 56273
"I wonder if we'll see the number of tasks go down to zero before tasks start flowing again."
Previously the batches have slightly overlapped (if I recall correctly), so after a year in the crunching stages, I suspect this project may be finished for our purposes and moving towards publishing. (We'll see if the limb I just occupied breaks under the weight of my speculation.)
Down to 2 tasks now, probably waiting to expire on unattended hosts.
ID: 56274
1 outstanding task left.
ID: 56277
"... and points do not apply to BOINC credit anymore."
Was this ever the case before?
ID: 56278
"... and points do not apply to BOINC credit anymore."
Never.
____________
Reno, NV Team: SETI.USA
ID: 56282
"... and points do not apply to BOINC credit anymore."
If you check the BOINC combined statistics page https://boinc.netsoft-online.com/e107_plugins/boinc/bp.php?project=11 , you'll find the original Folding@home stats from when it was a BOINC project. I'm not sure about the timing, because it was retired before I resumed crunching in 2019, having quit SETI in its early stage. I would guess that when Greg Bowman left Stanford and teamed up with Joe Coffland at Washington University in St. Louis, they decided to develop a new platform of their own. Greg is a Stanford grad, so the competition between Stanford and Berkeley might also be a factor. If you ask Dr. Bowman, I'm sure he will explain, as he has twice answered my questions when I have emailed him. There is a link to his email on the site.
I personally don't care about the credit aspect of crunching, but it would be more convenient for all of us here if the project were still on Berkeley's platform. The project is partnered with the COVID Moonshot project to develop an antiviral solution to the pandemic; that makes the credit seem insignificant to me.
Another plus I forgot to mention: FAHcore also uses OpenCL, so AMD GPUs are welcome, even when mixed with NVidia cards in a host. I recently noticed that F@h recognizes my Intel GPUs as well, but as yet no WUs are running on them.
ID: 56283
And still, one task just sitting out there in progress. Too bad they can't pull it and send it to someone else.
ID: 56291
If it's not returned within 5 days, it will automatically be sent to someone else.
ID: 56293
Now we've finally reached 0. Looking forward to the creation of new WUs. My GPUs are waiting.
ID: 56299
Hopefully!
ID: 56300
ID: 56301
That one single task completion was enough to bring the project out of its retirement listing on statseb, lol. It had been set to retired status a couple of days ago, while that last task was still in progress. This pleases my completionist side.
ID: 56302
Now would be a good time to give the database a thorough old spring cleaning. I've still got 12 tasks listed (mostly errors), dating back to 2013!
ID: 56303
Interesting: tasks in progress have risen to 10 this evening.
0_0-GERARD_pocket_discovery_56fe7d99_19b6_4687_86d7_bcb3f4e80b33-0-2-RND6221_0
A second host has received the same task at nearly the same time. Perhaps these new tasks are programmed to be double-checked by a wingman?
ID: 56309
However, the versions of the current Gpugrid applications remain unchanged, so we will probably have to wait a bit longer for a major application upgrade.
ID: 56310
"Perhaps these new tasks are programmed to be double-checked by a wingman?"
I can now answer my own question (2021-01-20 17:00 UTC). The following Work Units are currently "on the air": WU# 27020952, WU# 27020953, WU# 27020954, WU# 27020955, WU# 27020956, WU# 27020957, WU# 27020958, WU# 27020959, WU# 27020960.
They are not following a regular double-check verification, because some tasks have been awarded their credits (129,906.00) before the second paired task finished. I think this is a limited test aimed at achieving credit consistency across different GPU models.
Also, I've found that the GPU models assigned at least one task are: GTX 1660 Ti, GTX 1050 Ti, GTX 1050, TESLA K80, GTX 1060 6GB, RTX 2070, GT 710, GTX 660 Ti, GTX 1070, GTX 960, GTX 1650 SUPER, QUADRO M2000M, RTX 2060 SUPER, GTX 1060 3GB.
Such a wide range of GPU models for so few tasks in progress seems statistically unlikely; I tend to think there is some intention behind it...
ID: 56312
Hmmm, I'm having difficulty getting WUs. Two weeks, no luck.
ID: 56314
The current test with GERARD WUs seems to be (slowly) going on. There are currently (2021-01-24 11:30 UTC) 14 Work Units in progress.
ID: 56318
"Hmmm, I'm having difficulty getting WUs. Two weeks, no luck."
There is very little work for GPUGRID hosts at this time. (I can't tell if that is your problem, because you hide your computers.) If you check the server status regularly, you will stay well informed. Hopefully I've helped.
ID: 56319
Déjà vu
ID: 56321
It's the drought before the deluge! My hole dried up weeks ago! Ooo eerr, missus. Oh, and @d_worms https://www.gpugrid.net/show_user.php?userid=75061, you have no work units because that massive, horrendously tasteless and kitsch sig banner is blocking out everything, including sunlight!
ID: 56322
So much processor time, just going to waste.
ID: 56326
I've noticed that today around midday (UTC), about 600 ADRIA tasks were thrown into the field.
ID: 56378
To my big surprise, I just noticed that one of my machines received such an ADRIA task.
ID: 56379
"To my big surprise, I just noticed that one of my machines received such an ADRIA task."
I suspect the task will miss the deadline (Feb. 14, 13:57 hrs) by several hours, if not more. So it would most probably make sense to abort it, in order not to waste 5 days of computation time for nothing :-( Too bad.
ID: 56381
"I suspect the task will miss the deadline (Feb. 14, 13:57 hrs) by several hours, if not more. So it would most probably make sense to abort it..."
Certainly, it is a risk. But missing the deadline is not the only condition for losing the credit for the task. The other condition is that, once the deadline is reached, the resend of the same task must be caught by another host and reported before yours. This leaves a margin of at least ten more hours, even in the worst case where it lands on a high-end GPU. It is your decision whether to bet on those 348,750.00 credits or not...
ID: 56382
The bad thing is that this task was downloaded by one of my two machines with a GTX 750Ti inside.
ID: 56383
These ADRIA tasks of the current batch are extremely long ones.
ID: 56384
It can be a good challenge for overclockers to show off!
ID: 56385
I have a GTX 1660, but of course, I didn't get any of the work. :(
ID: 56386
Though it does look like more work units are appearing.
ID: 56387
"It can be a good challenge for overclockers to show off!"
~60,000 s on my RTX 2070; not really overclocked. It's power-limited to 150 W to increase power efficiency, but it runs only just slower than stock, since I apply an OC to bring clock speeds back up from what is lost to the power limit. https://www.gpugrid.net/result.php?resultid=32507258
Kevvy's 2080/Super system(s) did them even faster, but I don't know his exact config:
2080S, 55k s: http://www.gpugrid.net/result.php?resultid=32507109
2080, 48k s: http://www.gpugrid.net/result.php?resultid=32507309
2080ti, 41k s: http://www.gpugrid.net/result.php?resultid=32507185
[AF>Libristes] hermes: 2080ti, 36k s: http://www.gpugrid.net/result.php?resultid=32507426
These tasks don't appear to be standard-sized; some are harder/longer than others.
ID: 56390
"These ADRIA tasks of the current batch are extremely long ones."
Finally: 93,370.97 seconds on my GTX 1660 Ti GPU. That is 25 hours, 56 minutes and 11 seconds. Task #32506891. I definitely hadn't fought tasks like this at Gpugrid before. Across the GPU projects I collaborate on, they are only surpassed by the "Genefer Extreme" tasks at PrimeGrid, designed to find the largest known prime number, which take about twice as long to complete on the same graphics card.
ID: 56392
"It can be a good challenge for overclockers to show off!"
My teammate Ian already beat that time on two tasks:
https://www.gpugrid.net/result.php?resultid=32507258 60,151.89
https://www.gpugrid.net/result.php?resultid=32507259 62,129.12
ID: 56393
I picked up tasks on two machines this morning, so I'll throw my hat in the ring eventually.
ID: 56394
I have completed two tasks so far.
ID: 56405
Somehow amusing: these huge tasks are now causing quite a competition between the users as to whose GPU crunches them in the shortest possible time :-)))
ID: 56408
The best time I'm getting so far is 62,103.88, and that is on a Titan Xp.
ID: 56422
Actually, that system has a 2080, a 1080 Ti and two 2070s in it.
ID: 56423
My PC has an NVidia GeForce GTX 1050 Ti. It downloaded 2 tasks on Feb 10th, with an estimated remaining task time of about 5 hours.
ID: 56425
"My PC has an NVidia GeForce GTX 1050 Ti. It downloaded 2 tasks on Feb 10th, with an estimated remaining task time of about 5 hours."
If you reduce the cache size in your compute preferences, that should prevent it from downloading too many. Try 1-2 days.
ID: 56426
Running two tasks in parallel on one 1050Ti.
ID: 56436
My 1st ADRIA_D3RBandit (batch2) finished in 34,758s (9h 39m 18s) on an RTX 2080Ti / Linux.
ID: 56437
"My 1st ADRIA_D3RBandit (batch2) finished in 34,758s (9h 39m 18s) on an RTX 2080Ti / Linux."
Unless I missed something, that's the best time so far.
ID: 56440
Mr. Kevvy @ 34,476s, 2080ti.
ID: 56441
Thank you Jans et al.
ID: 56442
The project folder is called www.gpugrid.net, not ps3grid. It is at:
ID: 56444
"My 1st ADRIA_D3RBandit (batch2) finished in 34,758s (9h 39m 18s) on an RTX 2080Ti / Linux."
Thanks. It looks like my GTX 1650s will need around 45 hrs to complete one, and my GTX 1060 3GB cards will need 40. My "supercooled" GTX 750Ti has a predicted completion of around 100 hrs. I aborted the spare WU in my job cache when I realized it would expire before finishing. Many older cards won't make the 120 hr deadline, causing further delay and wasting GPU time and power. WUs this large should be given more time, IMHO.
ID: 56445
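The deadline arithmetic in the post above can be sketched as a back-of-envelope check (a hedged illustration only; the runtime figures are the estimates quoted in the post, and `check_card` is a name made up here):

```shell
# Can a card finish an ADRIA task inside the 120 h (5 day) deadline?
deadline_hours=120

check_card() {
    # $1 = card name, $2 = predicted runtime in hours (assumed estimates)
    if [ "$2" -le "$deadline_hours" ]; then
        echo "$1: makes the deadline with $((deadline_hours - $2)) h to spare"
    else
        echo "$1: misses the deadline by $(($2 - deadline_hours)) h"
    fi
}

check_card "GTX 1650" 45        # ~45 h estimate from the post
check_card "GTX 1060 3GB" 40    # ~40 h estimate
check_card "GTX 750 Ti" 100     # ~100 h estimate: cuts it close
```

Any interruption in 24/7 crunching quickly eats the 750 Ti's remaining 20-hour margin, which is the "further delay" risk described above.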
mmonnin said: "The project folder is called www.gpugrid.net, not ps3grid. It is at:"
The warning message has the ps3grid path for some reason. It appears when I view GPUgrid preferences. All the more reason to remove GPUGRID once that task finishes!
"You can just remove the project if you want, instead of all of BOINC."
Yes mmonnin, good catch. I will try to just remove GPUGrid first, and maybe manually remove its root directory contents before I reinstall it. Also, I found some info about BOINC GPU computing on the wiki and downloaded the GPU-Z tool: https://boinc.berkeley.edu/wiki/GPU_computing
ID: 56449
Pop Piasa said: "WUs this large should be given more time, IMHO."
I agree 100%! I just aborted my second task. Maybe also consider increasing the estimated time by a factor of 10.
ID: 56450
Task #32518256
ID: 56455
"The best time I'm getting so far is 62,103.88, and that is on a Titan Xp."
Run time: 54,221s (15h 3m 41s)
GPU: Titan Xp, driver 460.39
OS: Linux Ubuntu 20.04.2 LTS [5.4.0-65]
CPU: Intel Core i3-4360 @ 3.70GHz
RAM: 2x4GB DDR3 1866MHz
ID: 56456
34,035s, by my 2080ti.
ID: 56458
The 34K seconds boundary is under intense attack.
ID: 56459
I cranked up the power on a few cards, so I'll probably break 34k.
ID: 56460
I did it: 33,982.33 sec.
ID: 56464
+100
ID: 56469
Although the server status page shows more than 1,800 unsent tasks, tasks cannot be downloaded.
ID: 56538
I'm not having any issues maintaining my 2 task quota for each of my 3 hosts.
ID: 56541
"I'm not having any issues maintaining my 2 task quota for each of my 3 hosts."
When did you make your latest download? Here, I have been trying since early this morning, and it still does not work.
ID: 56542
On another PC, a task just finished and was uploaded, and there too no new task(s) can be downloaded.
ID: 56543
Am I still the only one whose machines haven't received new tasks since this morning?
ID: 56556
One of my hosts received a new task today, Feb 16 @ 14:02 UTC.
ID: 56559
"Am I still the only one whose machines haven't received new tasks since this morning?"
Same here. Some of my hosts get served with work, some of them don't.
ID: 56563
"One of my hosts received a new task today, Feb 16 @ 14:02 UTC."
Well, after about 90 mins of asking for a new task every 100 s, that system finally got one. So I guess just keep trying.
ID: 56565
"... and points do not apply to BOINC credit anymore."
Pop Piasa was saying that F@H credits do not count on BOINC. That's always been the case. If one wants to fold into the CureCoin pool, one can earn some CURE, if they still exist.
ID: 56567
"but my host running just a single card has not received a new task in a while. it's set to click update every 100 seconds."
Ian, is there a setting for this in the BOINC Manager that I am not aware of, or are you using a script to do this? What would this look like if, for example, I wanted to request a task every 5 min? It has become quite cumbersome doing that manually, hitting the Update button in the Manager every now and then...
ID: 56573
"I'm not having any issues maintaining my 2 task quota for each of my 3 hosts."
Computer: Serenity
Project: GPUGRID
Name: e2s1737_e1s150p0f146-ADRIA_D3RBandit_batch2-0-1-RND3476_1
Application: New version of ACEMD 2.11 (cuda100)
Workunit name: e2s1737_e1s150p0f146-ADRIA_D3RBandit_batch2-0-1-RND3476
State: Ready to start
Received: 2/16/2021 7:42:16 AM
Report deadline: 2/21/2021 7:42:15 AM
Estimated app speed: 689.93 GFLOPs/sec
Estimated task size: 5,000,000 GFLOPs
Resources: 1 CPU + 1 NVIDIA GPU
CPU time at last checkpoint: 00:00:00
CPU time: 00:00:00
Elapsed time: 00:00:00
Estimated time remaining: 13:47:39
Fraction done: 0.000%
Virtual memory size: 0.00 MB
Working set size: 0.00 MB
Debug State: 2 - Scheduler: 0
Seems to be the latest task received.
ID: 56574
"but my host running just a single card has not received a new task in a while. it's set to click update every 100 seconds."
It's not even a script, just a looping CLI command using the boinccmd utility:
$ watch -n 100 ./boinccmd --project https://www.gpugrid.net/ update
That's on Linux, though; you'll have to figure out the equivalent command to replicate this behavior on your Windows host.
ID: 56577
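For hosts without `watch`, the same behavior can be approximated with a plain shell loop. This is a hedged sketch: the interval and project URL mirror the command above, `request_update` is a name invented here, and the real `boinccmd` invocation is left commented out so the function only prints what it would run.

```shell
# Sketch of a watch-less equivalent of the looping update command above.
INTERVAL=100                          # seconds between scheduler requests
PROJECT_URL="https://www.gpugrid.net/"

request_update() {
    # The real invocation (assumes boinccmd is on PATH) would be:
    #   boinccmd --project "$PROJECT_URL" update
    echo "update requested for $PROJECT_URL"
}

# In practice, loop until a task arrives, then stop with Ctrl-C:
#   while true; do request_update; sleep "$INTERVAL"; done
request_update
```

Keeping the interval well above the server's per-host request timeout (discussed a few posts below) avoids the dropped-response problem.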
All right, thanks! I'll try to figure that out tonight.
ID: 56578
"It's not even a script, just a looping CLI command using the boinccmd utility."
That is probably why you are not getting work very often. There is a known problem on GPUGrid where you can't access their servers too often (including the website), or you don't get a response. I normally limit my machines to two requests at a time, at most.
ID: 56580
"It's not even a script, just a looping CLI command using the boinccmd utility."
You either didn't understand what I wrote, or you replied to the wrong person. This command is the reason I WAS able to get a task. If BOINC gets several scheduler request failures (or reports of no tasks available) and you don't have any tasks being crunched in your queue, it stops trying altogether. The only way to get it to check again is a manual update; otherwise it sits there doing nothing. The timeout you're referring to is only about 30 seconds, and I had the system making requests at intervals more than 3x the timeout specifically to avoid that. I only ran the command until a task was downloaded, then stopped it. The single task will take ~27 hrs to crunch on the 1660S, so there's no need to monitor it again until tomorrow.
ID: 56582
I understood that you did not get work very often even with that "fix". I think it makes it worse, though eventually you might get something. The less you access the servers, the better.
ID: 56584
Whether you're 1 second past the timeout or 2 hrs past the timeout makes no difference in regards to making a successful connection. I've tested this thoroughly. This assumes you only have a single system requesting a connection via that IP address.
ID: 56585
Someone suggested a while ago that having more than a certain number of machines attached reduced your chances of getting work on the additional ones. In other words, it wasn't just the total requests, but where they came from. I didn't pay much attention to it. But when I have even two machines running GTX 1070s attached, I often have problems accessing the website. Maybe you can get it to work with the right timeouts.
ID: 56586
It was me; I'm the one who mentioned that, lol. But the system in question is the only one running at this IP address, so it's not a factor in this particular instance.
ID: 56588
Yesterday I wrote: "Am I still the only one whose machines haven't received new tasks since this morning?"
During last night, I did receive new tasks :-)
ID: 56597
"I also wrote a script that tried to emulate this behavior of a forced longer timeout so you didn't need a custom client. If you search my past posts you can probably find it. It seemed to work 'ok' in my tests, but YMMV."
I was wondering if it was you. You have investigated it well enough, but it seems each person may need a custom script. That will be a problem.
ID: 56598
Yesterday I wrote that I had received new tasks during the night. Today, again, no new tasks could/can be downloaded all day long :-( What's the problem?
ID: 56618
"Today, again, no new tasks could/can be downloaded all day long :-( What's the problem?"
All 4 of your systems show at least one task in progress. Judging by how long these tasks run and the relative speed of the GPUs in your systems (the 750Tis just barely make the 5-day deadline), you seem to have the appropriate amount of tasks already. Why do you think there's a problem?
ID: 56619
"Why do you think there's a problem?"
Simply because in all the years before (I've been crunching for 6 years), each GPU could download 2 tasks, regardless of the crunching time per task. So something seems different now. Further, as you correctly stated, the 750Tis barely make the 5-day deadline, and I guess I am not the only one with such a card. So I don't understand why this deadline is not being extended when such huge long-runners are being distributed.
Back to the download problem: of course, it would not make sense anyway to download 2 new tasks at a time for the 750Tis; but for the 970 and the 980Ti it would make sense (particularly in times when, every so often, no new tasks are available, like right now).
ID: 56623
BOINC keeps track of the total time it will take to compute the jobs in your cache, and compares that to the cache size you have specified in the compute settings.
ID: 56624
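The comparison described above can be sketched numerically (illustrative numbers only; this is the gist of the work-fetch decision, not BOINC's actual code):

```shell
# The client asks for work only when the estimated runtime of queued jobs
# falls below the configured cache. All numbers below are assumptions.
queued_seconds=97200        # e.g. one ~27 h ADRIA task already queued
cache_days=1                # "store at least X days of work" preference
cache_seconds=$((cache_days * 86400))

if [ "$queued_seconds" -lt "$cache_seconds" ]; then
    echo "request more work"
else
    echo "queue full enough; no work request"
fi
```

With one ~27 h long-runner queued against a 1-day cache, the client is already "full", which is why a host holding one of these huge tasks stops asking for a second.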
Additionally, the outstanding resends seem to have dried up for the moment, so it'll probably be a little hard to get tasks for the next few days.
ID: 56625
"The only thing that changed was the task size, and we've never had tasks that run this long before, so it might not have been long enough for BOINC's mechanisms to come into play. Yes, maybe so, as far as deadlines go. The goal of this, and any, project is scientific results, and since they are running the project, they are the ones that set the requirements for how long they want to wait for results. If they've determined that they want/need results in X number of days, that's their right; it's their project."
I agree. However, I don't think the 5-day deadline per task is absolutely mandatory for the result of the project. The situation is rather that this time span was set a long time ago, and ever since, tasks have lasted from a few hours to a maximum of 1 or 2 days, even with older cards (at least that's what I remember from the past 6 years I've been on board). And now, with these unusual long-runners, probably no one at GPUGRID has taken into account that there are still many users with old cards which will barely, or not at all, be able to finish these tasks within 5 days. In other words, I do not think (or hope) that their intention is to preclude the many users with older cards from crunching their tasks.
ID: 56627
"And now, with these unusual long-runners, probably no one at GPUGRID has taken into account that there are still many users with old cards which will barely, or not at all, be able to finish these tasks within 5 days."
But they certainly are aware of the issue. See Toni's post from 4 days ago, where he explicitly mentions that the tasks are not meant for low-powered cards (my emphasis): https://www.gpugrid.net/forum_thread.php?id=5217&nowrap=true#56504
ID: 56629
"And now, with these unusual long-runners, probably no one at GPUGRID has taken into account that there are still many users with old cards which will barely, or not at all, be able to finish these tasks within 5 days."
So hopefully this does not mean that, if this new experiment works out well, mainly such tasks requiring powerful cards will come in the future. On the other hand, if that is what happens, there is not much the crunchers with less powerful cards can do about it :-(
ID: 56630
You could upgrade. Even though the used market is a little shaken up right now, you can easily get a more capable GPU than a 750Ti for not too much money. Something like a GTX 960/970/980 is still cheap.
ID: 56631
At least, if these kinds of tasks were to become more common, or even the standard workload, in the foreseeable future, I don't see a reason why they couldn't adjust their 24 hr return bonus slightly upwards, so as not to penalize still relatively new (arguably less powerful, and thus less important as contributions) hardware that misses the bonus deadline only by a small margin, just shy of the 24 hr mark. As seen in numerous performance benchmarks here, there is still a large user base with 10-series or 16-series cards that will inevitably miss this 24 hr mark. Why not take this change in workload as an opportunity to change the credit calculation, that is, if similar workloads continue in the foreseeable future? Increasing this mark by just 1-2 hrs might still give 2-year-old cards (16-series) a chance to make it in time for the bonus.
ID: 56632
"You could upgrade. Even though the used market is a little shaken up right now, you can easily get a more capable GPU than a 750Ti for not too much money. Something like a GTX 960/970/980 is still cheap."
You are perfectly right! My problem is not a lack of money for newer/better GPUs, but rather that the two boxes in which the two 750Tis are crunching are rather small ("mini-towers"). I was lucky to manage to accommodate the 750Tis a few years ago. The boxes still work well (in one of them I upgraded the CPU from a 2-core to a 4-core, in the other I upgraded the RAM), so I simply do not want to get rid of these PCs if not absolutely necessary. Once I have to replace them for any reason, I will of course get PCs with new and strong GPUs. Until then, I was hoping to contribute to GPUGRID with my current 750Tis.
ID: 56633
Today, on one of my 750Tis, a task finished after 430,782 seconds and was classified as "too late": no credits.
ID: 56649
http://www.gpugrid.net/workunit.php?wuid=27021719
ID: 56650
Anyone else getting the impression that the latest batch of tasks, released in the last day or two, are running significantly slower than their predecessors?
ID: 56651
Everything seems normal here. Everything I have in progress looks in line with historical data for those systems.
ID: 56652
Thanks. I'll keep an eye on them, and see what happens when they report. No change in the running conditions here: the only unusual thing is that I ran some of the WCG COVID betas on the GPUs yesterday. It would be a big snafu if they had left the cards in a dodgy state after running, but I don't think it's likely.
ID: 56653
"http://www.gpugrid.net/workunit.php?wuid=27021719"
Thanks for the explanations. So it seems to be a kind of lottery whether a task crunched by a GTX 750Ti succeeds or fails after 5 days. Really a waste of CPU and GPU power and time :-( All the more, my decision to change to Folding@home was a good one.
ID: 56654
"So it seems to be a kind of lottery whether a task crunched by a GTX 750Ti succeeds or fails after 5 days. Really a waste of CPU and GPU power and time :-("
I have my GTX 750Ti overclocked to 1366 MHz, and it can finish within the time constraints if it crunches 24/7. However, the base credit puts it well below the 100K/day it earned during Toni's methods-development MDAD project. Of course it's about the bonus for quick turnaround; however, Toni used the same expiration window for WUs that were minuscule by comparison. There is no adjustment here for the relative size of the jobs. I sense that this project is discouraging participation by the "small-timers" with older or budget cards. It is IMO fairer to grant credit solely on FLOPS completed, but turnaround time is a major factor in distributed supercomputing. That makes the fastest hosts the most desirable, and extra credit keeps them hooked. Although I am presently committing 2 of my hosts to GPUG, I have my GTX 750Ti crunching FAHcore tasks and gaining around 175K points there each day.
ID: 56655
It's said that a stopped clock shows the exact time twice a day...
ID: 57210
There was a small batch of tasks yesterday, but they disappeared very quickly: mainly because most users returned them with an error after about three seconds. WU 27077711 was created at 18:33, and ran through the maximum error count in half an hour.
ID: 57211
"There was a small batch of tasks yesterday, but they disappeared very quickly: mainly because most users returned them with an error after about three seconds."
One of my hosts also caught two of these shorter CRYPTICSCOUT_pocket_discovery tasks. Since I have applied the same WORKAROUND to every one of my hosts, both tasks finished successfully:
0_1-CRYPTICSCOUT_pocket_discovery_9f6d15b7_c2ff_4a6e_a594_7cc0797a7231-0-1-RND3838_0
0_0-CRYPTICSCOUT_pocket_discovery_9f6d15b7_c2ff_4a6e_a594_7cc0797a7231-0-1-RND1209_0
Additionally, the same host had previously caught one of the very few remaining ADRIA_New_KIXcMyb_HIP_AdaptiveBandit long tasks. I estimate it will finish next Tuesday, July 20th, at about 19:50 UTC.
ID: 57212
"Additionally, the same host had previously caught one of the very few remaining ADRIA_New_KIXcMyb_HIP_AdaptiveBandit long tasks."
Finally, task e3s455_e1s595p0f274-ADRIA_New_KIXcMyb_HIP_AdaptiveBandit-1-2-RND9459_1 came out today at exactly 19:52:25 UTC. It took 279,050 seconds of total processing time; that is, 3 days, 5 hours, 30 minutes and 50 seconds in a non-stop journey. These ADRIA long tasks are a hard birth for a GTX 1650 GPU...
Currently, the number of tasks in progress is down to 4.
ID: 57220
Down to only 2 now, but still 20 Anaconda tasks.
ID: 57221