
Message boards : Server and website : Task queuing plans for the summer

Retvari Zoltan
Message 43257 - Posted: 26 Apr 2016 | 23:51:26 UTC

Hello Gerard,

As it seems that you are the one running GPUGrid these days, I would like to hear about your plans for the summer.
I think the other crunchers are also interested.
If you go away for more than a day, the queues here will run dry, my fellow GPUGrid crunchers will go mad, and the staff at Einstein@home, SETI@home and POEM will be very happy about the extra crunching capacity they will have; but unfortunately their servers will run dry too as a consequence.
So it seems that if you leave the campus for the summer break, the whole world (of BOINC) will collapse.
It would be very unwise on your part to start such a chain reaction by taking a long (two-day) summer vacation.
Taking all of the above into consideration, what are your plans for the summer? :)

Bedrich Hajek
Message 43258 - Posted: 27 Apr 2016 | 2:20:41 UTC - in response to Message 43257.

Is it really possible that, if GPUGrid WUs dry up and its several hundred crunchers go to these other projects, we could dry them out as well, in a matter of days?

Retvari Zoltan
Message 43260 - Posted: 27 Apr 2016 | 8:22:29 UTC - in response to Message 43258.

    Is it really possible that, if GPUGrid WUs dry up and its several hundred crunchers go to these other projects, we could dry them out as well, in a matter of days?

I don't think so. But even if it could happen (they will have a summer vacation too), the world of BOINC won't collapse.

Jozef J
Message 43263 - Posted: 27 Apr 2016 | 20:44:54 UTC

Is this all just for fun, or could such a situation really occur??
Luckily, I got my 1 billion before that :-)))

Bedrich Hajek
Message 43264 - Posted: 27 Apr 2016 | 21:46:54 UTC - in response to Message 43260.

    Is it really possible that, if GPUGrid WUs dry up and its several hundred crunchers go to these other projects, we could dry them out as well, in a matter of days?

    I don't think so. But even if it could happen (they will have a summer vacation too), the world of BOINC won't collapse.

That's good. Now I can sleep at night, knowing the world isn't going to end this summer!!



Jacob Klein
Message 43284 - Posted: 30 Apr 2016 | 13:51:41 UTC

An announcement was made here, indicating that tasks will ramp up mid-summer:
https://www.gpugrid.net/forum_thread.php?id=4297

MrJo
Message 43288 - Posted: 2 May 2016 | 9:57:59 UTC

Sad record: of my six computers working for GPUGrid, not one is getting any WUs. It seems as if it no longer makes sense to provide computing power to GPUGrid.


____________
Regards, Josef

Jacob Klein
Message 43289 - Posted: 2 May 2016 | 12:22:03 UTC - in response to Message 43288.
Last modified: 2 May 2016 | 12:22:42 UTC

    Sad record: of my six computers working for GPUGrid, not one is getting any WUs. It seems as if it no longer makes sense to provide computing power to GPUGrid.


Actually, I love this project, and continue to have all my PCs ready to do GPUGrid work when it is available! And I've attached to other GPU projects too, so that the GPUs can stay busy during times when GPUGrid is out of work. Me, personally, I've got my GPUGrid Resource Share set to 99999, so my PCs ask GPUGrid for GPU work first, with ~6 other GPU projects attached at Resource Share 100.

I'd suggest that you try to stop being sad / down / negative! Instead, additionally attach to some other GPU projects, and find enjoyment!

MrJo
Message 43309 - Posted: 3 May 2016 | 22:30:07 UTC - in response to Message 43289.

    ...and continue to have all my PCs ready to do GPUGrid work when it is available! And I've attached to other GPU projects too, so that the GPUs can stay busy during times when GPUGrid is out of work.

Did that too. Works fine so far.

____________
Regards, Josef

jjch
Message 43315 - Posted: 5 May 2016 | 21:17:08 UTC

@Jacob Klein

From what I understand, on the BAM "My projects" page the maximum resource share is 10000:

    Maximum resource share is 10000. Anything higher will be reduced to 10000.

If you set it to 99999, the percentage will still be calculated using the 10000 value.

I currently have my primary projects set at 10000, secondary projects at 25, and tertiary projects at 0.

    Project name           Category          Last attempt              Resource share   Attach new host by default?
    Enigma@Home            Mathematics       Success                   0                ✓
    GPUGRID                Biology           Success                   10000            ✓
    POEM@HOME              Biology           Success                   25               ✓
    Rosetta@Home           Biology           Success                   10000            ✓
    SETI@Home              Astrophysics      Success                   25               ✓
    World Community Grid   Umbrella project  No response from project  0                ✓

Right now I have both CPU and GPU work enabled for POEM, so it keeps my systems pretty well loaded when there isn't GPUGRID or Rosetta work available. BOINC should still give those projects priority when more work comes in.

Jacob Klein
Message 43316 - Posted: 5 May 2016 | 22:44:23 UTC

BAM takes 99999, and gives 99999 to the client, and the client uses 99999.

My setup is:

GPUGrid: 99999
RNA World: 99999
World Community Grid: 400
~20 Projects I don't mind doing work for: 100 (default)
~20 Projects I only want work from as a last resort: 0
~10 Projects I don't want work from ever: No New Tasks

Fun!

jjch
Message 43317 - Posted: 6 May 2016 | 1:28:51 UTC

Well ok then. Apparently that does actually work. Good to know.

I previously had GPUGRID and Rosetta set at 10,000, which was 48.54% each.
When I changed GPUGRID to 99,999, it bumped up to 90.42% and Rosetta dropped to 9.04%.
The balance of the percentage is left for the remaining projects.

Still, I believe it's a ratio of the projects to each other, whether the scale is 0-100, 0-10000 or 0-99999. As long as the percentages are where you want them and they work for you, you're good to go.
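
That matches the simple ratio: a project's percentage is just its share divided by the sum of all attached projects' shares. A quick sketch to illustrate the arithmetic (made-up share values loosely based on this thread, not BOINC code):

    # Hypothetical resource shares, loosely based on this thread.
    shares = {"GPUGRID": 99999, "Rosetta@Home": 10000,
              "POEM@HOME": 25, "SETI@Home": 25}
    total = sum(shares.values())  # 110049
    for project, share in shares.items():
        print(f"{project}: {100 * share / total:.2f}%")
    # GPUGRID comes out at ~90.9% here; attach more projects and
    # the percentage drops a little, as in the numbers above.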

It seems that if you set a project to "Won't get new tasks", it is essentially removed from the percentage calculation, so what's indicated may not be exactly what you are getting.

Also, from what I have seen, if you allow a project to get new tasks again, it will be behind, so BOINC will give it priority and spend some extra time catching up.

Jacob Klein
Message 43318 - Posted: 6 May 2016 | 2:14:07 UTC - in response to Message 43317.

For a given client, BOINC maintains a Recent Estimated Credit (REC) value for each project, and calculates a "fetch priority" based on REC and queued work.
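
The rough idea (this is a simplified sketch of the scheme described in the articles linked below, not BOINC's actual client code; the real algorithm also weighs queued work and other factors) is that a project whose recent credit lags behind its resource share gets asked for work first:

    # Simplified sketch of REC-based work fetch priority.
    # Hypothetical helper, not BOINC's actual implementation.
    def fetch_priority(share, rec, total_share, total_rec):
        share_frac = share / total_share                   # entitled fraction
        rec_frac = rec / total_rec if total_rec else 0.0   # earned fraction
        # A project that has earned less recent credit than its
        # resource share entitles it to gets a positive priority,
        # so the client asks it for work first.
        return share_frac - rec_frac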

You can actually see the REC and priority values pretty easily, if you turn on the work_fetch_debug option, in Options -> Event Log Options, or in your cc_config.xml file (instructions here: https://boinc.berkeley.edu/wiki/Client_configuration)
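
For reference, the flag sits in the log_flags section of cc_config.xml in the BOINC data directory, for example:

    <cc_config>
      <log_flags>
        <work_fetch_debug>1</work_fetch_debug>
      </log_flags>
    </cc_config>

After editing the file, use Options -> Read config files (or restart the client), and the per-project REC and priority values show up in the Event Log.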

For more info about the REC and resource-share work fetch decisions, you can read up on these dated but useful articles:
http://boinc.berkeley.edu/trac/wiki/ClientSched
http://boinc.berkeley.edu/trac/wiki/ClientSchedOctTen

I know a lot about this, because I helped David A. troubleshoot work fetch for a couple months at one point - it was awesome, and it really helped BOINC to make smart/correct decisions about which projects to ask work from. There's a reason I'm attached to 60 of them! :)

If you have any other questions about this, feel free to PM me. In the meantime, let's let this thread get back on topic, lol.

Gerard
Message 43335 - Posted: 9 May 2016 | 10:03:32 UTC - in response to Message 43257.
Last modified: 9 May 2016 | 10:04:12 UTC

Hi Retvari and GPUGrid crunchers!

It's good that you ask this question; it makes us think about it too. :) As said, a new student should be on board by June. He will carry on with the good old Nate work (disordered proteins), so we should have some WUs available from his side.

On the other side, we may start a new European project involving another chemokine, or some testing of umbrella protocols, which should take another good piece of the computing cake.

By the way, Stefan and I just filled up the queues and would be very happy to get your computing power in these last weeks before the downfall. In my case, I'm just simulating some last ligands I need for the paper I am preparing :)

Yours,

Gerard M.

Bedrich Hajek
Message 44094 - Posted: 6 Aug 2016 | 5:01:56 UTC - in response to Message 43335.

Since we are running low on tasks, would you mind giving us an update on what you are planning to do next, as far as new simulations and tasks, and when we can expect them?