Message boards : Number crunching : New simulations soon

Stefan
Project administrator
Project developer
Project tester
Project scientist
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Message 46541 - Posted: 23 Feb 2017 | 11:06:44 UTC

I know we have been a bit low on simulations lately. Right now we are setting up multiple systems with Adria, though, so we should be sending out new WUs either by the end of this week or by the beginning of next week.

It simply takes some time to find proteins we think are interesting to simulate, prepare them, write the software necessary for doing adaptive sampling on them (since we changed the method a bit), and so on. We will take a final look at it today with Gianni.

PappaLitto
Joined: 21 Mar 16
Posts: 511
Credit: 4,672,242,755
RAC: 0
Message 46542 - Posted: 23 Feb 2017 | 11:37:44 UTC

Thank you for keeping us updated, Stefan.

Rion Family
Joined: 13 Jan 14
Posts: 21
Credit: 15,415,926,517
RAC: 12,493
Message 46551 - Posted: 23 Feb 2017 | 23:57:01 UTC

Yes, thank you for the update; it is appreciated! We understand the complexity of your work and look forward to helping you succeed.



Stefan
Project administrator
Project developer
Project tester
Project scientist
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Message 46564 - Posted: 27 Feb 2017 | 10:18:01 UTC
Last modified: 27 Feb 2017 | 10:18:54 UTC

Haha, I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. It requires some communication with other groups as well, though. It's really not as easy as it might sound :)

Bedrich Hajek
Joined: 28 Mar 09
Posts: 485
Credit: 10,502,298,466
RAC: 15,195,281
Message 46565 - Posted: 28 Feb 2017 | 0:10:24 UTC - in response to Message 46564.
Last modified: 28 Feb 2017 | 0:11:21 UTC

Haha, I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. It requires some communication with other groups as well, though. It's really not as easy as it might sound :)


Good, take your time and get it right. That's more important than haste. Also, see if you can give these WUs high GPU usage. I wouldn't mind if you threw in some ultra-long WUs, provided you put them in a separate category.

So how many WUs will there be in these simulations, approximately?

will
Joined: 17 Jan 17
Posts: 6
Credit: 8,770,500
RAC: 0
Message 46566 - Posted: 28 Feb 2017 | 0:11:51 UTC - in response to Message 46564.

Haha, I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. It requires some communication with other groups as well, though. It's really not as easy as it might sound :)


Update is most appreciated! In the meantime I've got my Titan XP grinding away at a Genefer n=22 prime on PrimeGrid, going for that world record ;)

GDB
Joined: 24 Oct 11
Posts: 4
Credit: 357,223,487
RAC: 265,881
Message 46575 - Posted: 2 Mar 2017 | 11:55:46 UTC - in response to Message 46566.



Update is most appreciated! In the meantime I've got my Titan XP grinding away at a Genefer n=22 prime on PrimeGrid, going for that world record ;)


A Genefer n=22 prime found now would only be the 2nd largest, not a world record!

Logan Carr
Joined: 12 Aug 15
Posts: 240
Credit: 64,069,811
RAC: 0
Message 46576 - Posted: 3 Mar 2017 | 0:52:45 UTC - in response to Message 46564.

Haha, I think I should have put soon(tm).

No, we are still actively working on getting the simulations ready. It requires some communication with other groups as well, though. It's really not as easy as it might sound :)



Hi Stefan!

Once you are done working on the simulations, will there be tasks available at all times for a while, or will they only come in phases?

I'm just wondering so I can plan ahead. Thank you :)
____________
Cruncher/Learner in progress.

PappaLitto
Joined: 21 Mar 16
Posts: 511
Credit: 4,672,242,755
RAC: 0
Message 46577 - Posted: 3 Mar 2017 | 13:49:04 UTC

It seems that we are finally saturating the hosts with work! I have paused all folding to continue with GPUGrid now that my GPUs can do something.

will
Joined: 17 Jan 17
Posts: 6
Credit: 8,770,500
RAC: 0
Message 46578 - Posted: 3 Mar 2017 | 14:03:01 UTC
Last modified: 3 Mar 2017 | 14:06:04 UTC

The GPU is humming! Looking at ~70% utilization on a Titan X Pascal, with an estimated completion time of 1 hr 7 min.

I do, however, find it interesting that the power draw is only hovering around 40% of TDP...

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 46583 - Posted: 3 Mar 2017 | 21:00:03 UTC - in response to Message 46578.
Last modified: 3 Mar 2017 | 21:01:00 UTC

The GPU is humming! Looking at ~70% utilization on a Titan X Pascal, with an estimated completion time of 1 hr 7 min.

I do, however, find it interesting that the power draw is only hovering around 40% of TDP...


I would run 2 concurrent jobs on this GPU, as 70% utilization is very low, even by Win10 standards. Your CPU is a good one, but its single-core performance is not high enough to feed that Pascal monster on its own. Two jobs per GPU should yield >90% load right away, despite the nasty WDDM handbrake.
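
For reference, the usual way to do this is an app_config.xml in the GPUGrid project folder (e.g. projects/www.gpugrid.net) under the BOINC data directory. A minimal sketch, assuming the application is named acemdlong - the app name and folder here are assumptions, so check the <app> entries your own client_state.xml actually lists:

<app_config>
  <app>
    <!-- assumed app name; replace with the one your client_state.xml lists -->
    <name>acemdlong</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task: two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve one CPU core per task to keep the GPU fed -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Save the file, then use the Manager's "Read config files" option (or restart the client) so BOINC picks it up.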
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Logan Carr
Joined: 12 Aug 15
Posts: 240
Credit: 64,069,811
RAC: 0
Message 46584 - Posted: 4 Mar 2017 | 4:50:38 UTC - in response to Message 46583.

How can I make my BOINC client check GPUGrid for new tasks several times a minute?

No matter how long I leave my PC on, it just doesn't get any tasks.
____________
Cruncher/Learner in progress.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 46586 - Posted: 4 Mar 2017 | 5:40:41 UTC - in response to Message 46584.
Last modified: 4 Mar 2017 | 5:43:46 UTC

Logan, I know the answer to your question, but I refuse to answer it, because it sounds like you basically want to hammer the server with that knowledge. The whole reason BOINC backoffs exist is to prevent DDoS-style hammering.

How about you set yourself up with some 0-resource-share backup GPU projects, and just let BOINC get work from GPUGrid when it can. It was designed so you don't have to babysit it.

Now, if you believe you are having a legitimate problem, where the server has tasks available, yet your work fetch got 0 tasks, then please open that discussion in a new thread, and I'd be more than happy to help you with it. I helped David design the current work fetch algorithms, and they work pretty well, but are not perfect.

If you feel like investigating solo, then use Options -> Event Log Options -> work_fetch_debug, and then look at Tools -> Event Log. Lots of fun info in there, if you're willing to learn.
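
(For anyone who prefers a file over the GUI: the same flag can be set in cc_config.xml in the BOINC data directory. A minimal sketch, equivalent to ticking work_fetch_debug in Event Log Options:

<cc_config>
  <log_flags>
    <!-- log each project's work-fetch decisions to the Event Log -->
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>

Re-read config files or restart the client, then watch the Event Log during the next scheduler request.)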

Logan Carr
Joined: 12 Aug 15
Posts: 240
Credit: 64,069,811
RAC: 0
Message 46587 - Posted: 4 Mar 2017 | 5:55:44 UTC - in response to Message 46586.

Logan, I know the answer to your question, but I refuse to answer it, because it sounds like you basically want to hammer the server with that knowledge. The whole reason BOINC backoffs exist is to prevent DDoS-style hammering.

How about you set yourself up with some 0-resource-share backup GPU projects, and just let BOINC get work from GPUGrid when it can. It was designed so you don't have to babysit it.

Now, if you believe you are having a legitimate problem, where the server has tasks available, yet your work fetch got 0 tasks, then please open that discussion in a new thread, and I'd be more than happy to help you with it. I helped David design the current work fetch algorithms, and they work pretty well, but are not perfect.

If you feel like investigating solo, then use Options -> Event Log Options -> work_fetch_debug, and then look at Tools -> Event Log. Lots of fun info in there, if you're willing to learn.



Hi,

I have no interest in DDoSing anything, nor do I care to learn how. This was a serious problem. But it's OK; I changed a couple of settings on my own and I now have a single task, which is all I wanted.

And my apologies, I will make a new thread next time. I should have done that, but I honestly have a bad habit of asking in other threads.

Thank you

-Logan

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Message 46588 - Posted: 4 Mar 2017 | 6:03:02 UTC - in response to Message 46587.

I pretty much already knew that you didn't intentionally want to burden the server - most people here are friendly :) But, in reality, if you bypass the BOINC backoffs, that is exactly what you are doing - burdening the server unnecessarily.

That being said... I have recently employed a method to change the maximum backoff from 1 day to 1 hour for one of my projects, which is why I said "I know the answer". If you follow me on other projects, you'll see why I did what I did. Let's just say, I have the capability to run a task for 500 days without it crashing :)

That project isn't GPUGrid, and GPUGrid wouldn't benefit at all from us trying to get tasks at a rate faster than the standard BOINC backoffs.

I'm glad to see "New simulations soon" in this thread's title, and I'm glad we're seeing SOME new work, but I too have GPUs (5 of them, in 2 PCs!) that sometimes don't get GPUGrid work... and when that happens, they get single one-off tasks from the 0-resource-share backup projects -- Seti, Einstein, Asteroids, etc. I totally recommend you set those up, to keep your GPUs from going idle! :)

Logan Carr
Joined: 12 Aug 15
Posts: 240
Credit: 64,069,811
RAC: 0
Message 46589 - Posted: 4 Mar 2017 | 6:18:21 UTC - in response to Message 46588.

Okay, I will add PrimeGrid as a backup project and set its resource share to 0. Thank you, and I'm glad we reached an understanding of the situation.
____________
Cruncher/Learner in progress.

PappaLitto
Joined: 21 Mar 16
Posts: 511
Credit: 4,672,242,755
RAC: 0
Message 46613 - Posted: 9 Mar 2017 | 14:36:54 UTC

Very high GPU utilization on these new ADRIA WUs on Windows 10. Very impressed - keep up the good work!

Stefan
Project administrator
Project developer
Project tester
Project scientist
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Message 46614 - Posted: 9 Mar 2017 | 14:42:25 UTC
Last modified: 9 Mar 2017 | 14:42:36 UTC

And we are on :D Finally a nice big batch of simulations, hehe. They will decrease a bit over the next few days, but I think it should be a relatively stable supply now that we have the adaptive sampling going on those proteins.
Enjoy!

3de64piB5uZAS6SUNt1GFDU9d...
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Message 46617 - Posted: 9 Mar 2017 | 15:49:27 UTC

Nice! If it goes on like this I really will have to raid my piggy bank and buy the new 1080 Ti...
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.
