
Message boards : News : WU: BARNA

Author Message
Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 36142 - Posted: 7 Apr 2014 | 16:10:30 UTC
Last modified: 7 Apr 2014 | 16:16:14 UTC

Hey everyone,
I am sending out some WUs to the long queue called BARNA (pun intended). The system we are investigating is Barnase/Barstar which are two proteins that interact with each other.
http://en.wikipedia.org/wiki/Barnase
http://en.wikipedia.org/wiki/Barstar

This will be (as far as I know) our first protein-protein interaction study, and we are hoping to observe interactions like the ones in the crystallographic model and to further develop the corresponding analysis tools.

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36149 - Posted: 8 Apr 2014 | 2:12:53 UTC

One completed so far. Very nice GPU utilization on GTX 780Ti w/ Win7 at ~85% with CPU crunching at 75%.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36151 - Posted: 8 Apr 2014 | 3:39:42 UTC - in response to Message 36149.
Last modified: 8 Apr 2014 | 3:41:19 UTC

I recently created an application for my Dell XPS 730x computer that controls the XPS system fan, ramping it up and throttling it down based on the GPU temperature of my GTX 660 Ti.

And then I picked up one of these BARNA tasks.
WOW!

It is pushing my GTX 660 Ti very hard: 100% power consumption, 80% (full) GPU fan, and 92% GPU usage, even with a full load of CPU tasks also running.

This is all very good, of course. I don't think I've ever seen my GPU work harder on any other task type. And, for reference, I have a closed case that runs hot, and this task type triggered my program to keep my system fan at full speed :)

Thanks for helping me test my program.

Regards,
Jacob
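
A minimal sketch of this kind of temperature-driven fan control, assuming nvidia-smi is available for the temperature readout; set_system_fan() is a hypothetical stand-in for the vendor-specific XPS chassis-fan interface, which is not shown:

import subprocess
import time

def gpu_temperature(gpu_index=0):
    """Read the GPU core temperature in degrees C via nvidia-smi."""
    out = subprocess.check_output([
        "nvidia-smi", f"--id={gpu_index}",
        "--query-gpu=temperature.gpu",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

def set_system_fan(duty_percent):
    """Hypothetical hook; the real XPS 730x fan interface is vendor-specific."""
    print(f"system fan -> {duty_percent}%")

# Simple hysteresis: ramp up above 75 C, drop back down below 65 C.
FAN_HIGH, FAN_LOW = 100, 40
fan = FAN_LOW
set_system_fan(fan)
while True:
    temp = gpu_temperature()
    if temp >= 75 and fan != FAN_HIGH:
        fan = FAN_HIGH
        set_system_fan(fan)
    elif temp <= 65 and fan != FAN_LOW:
        fan = FAN_LOW
        set_system_fan(fan)
    time.sleep(5)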

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 36152 - Posted: 8 Apr 2014 | 10:19:42 UTC

Currently chewing one too, very nice utilization:

vagelis@vgserver:~$ gpuinfo
Fan Speed          : 47 %
Gpu                : 59 C
FB Memory Usage
    Total          : 1023 MiB
    Used           : 868 MiB
    Free           : 155 MiB
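
gpuinfo appears to be a local wrapper around the NVIDIA driver query; assuming a standard nvidia-smi install, a minimal Python equivalent that reports the same fields might look like this (the field names are standard nvidia-smi query properties):

import subprocess

# Ask nvidia-smi for fan speed, temperature and memory usage in one call.
fields = "fan.speed,temperature.gpu,memory.total,memory.used,memory.free"
out = subprocess.check_output(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader,nounits"]
).decode()

# One line per GPU; take the first card for this example.
fan, temp, mem_total, mem_used, mem_free = [v.strip() for v in out.splitlines()[0].split(",")]
print(f"Fan {fan}%  GPU {temp} C  Memory {mem_used}/{mem_total} MiB used, {mem_free} MiB free")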

____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36154 - Posted: 8 Apr 2014 | 11:48:23 UTC

My 780Ti does very well with these too: 88-90% GPU load and using 1150MB of memory. Temperature is a bit higher, now 76-77°C, versus 72-74°C with other WU's. But ambient temperature is 29°C.
Still using the "old" 331.82 driver.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36159 - Posted: 8 Apr 2014 | 13:29:09 UTC - in response to Message 36154.

Stefan, excellent research choice!

I concur that these WU's use plenty of GDDR, use a bit more power, utilize the GPU very well (91% for me), result in slightly higher temps and so far appear to be stable; my GTX770 is running true at 1267MHz. Using 335.23.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jozef J
Send message
Joined: 7 Jun 12
Posts: 112
Credit: 1,118,845,172
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 36168 - Posted: 8 Apr 2014 | 20:19:24 UTC

Very good job and idea.
GPU load: 86-87%
CPU: 0.812, one core
Memory used: about 2660 MB on each GeForce card. It's great if the tasks are starting to use all the memory available on the graphics card; I have six gigabytes free on one card and four gigabytes on the other :)


Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 36826 - Posted: 13 May 2014 | 9:20:43 UTC

From the previous simulations we found out that we need to produce longer simulations to analyze their interactions well, so I am submitting some more of these workunits. Should fill up the queue for a while :)

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36836 - Posted: 14 May 2014 | 23:16:40 UTC
Last modified: 14 May 2014 | 23:20:16 UTC

Thank you for the follow up - we appreciate seeing the continuing progress and evolution of this experiment.

Longer tasks are typically more challenging, in a good way, as we need to keep our rigs stable and running smoothly for longer stretches. A full queue always keeps us grunts happy.

All put together - well done!
____________
Thanks - Steve

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36837 - Posted: 15 May 2014 | 9:06:30 UTC

These longer BARNA's run smoothly on my GTX780Ti. Steady 88% GPU load, using ~1100MB RAM, and the card boosts at a steady 73°C (thanks to Jeremy Z). Completed in around five and a half hours on Win7 with the "old" 331.82 driver.
By comparison: 7.8 hours on my GTX770. I like these WU's, Stefan.
____________
Greetings from TJ

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 36839 - Posted: 15 May 2014 | 10:04:19 UTC - in response to Message 36837.

Thanks :) Sorry I didn't make it clear, but the single WU steps that you calculate are actually the same length as the older ones. The simulations are just configured to run for more steps, meaning they will be longer in the end. So in theory, from your point of view, each WU should take roughly the same time as before (plus or minus half an hour?).

Have fun!

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36842 - Posted: 16 May 2014 | 7:54:55 UTC - in response to Message 36837.

These longer BARNA's run smoothly on my GTX780Ti. Steady 88% GPU load, using ~1100MB RAM, and the card boosts at a steady 73°C (thanks to Jeremy Z). Completed in around five and a half hours on Win7 with the "old" 331.82 driver.
By comparison: 7.8 hours on my GTX770. I like these WU's, Stefan.


That's quick.
It's looking like about 8h 45min on my GTX770 (W7x64 1163MHz Boost) 337.50.
Probably down to not using SWAN_SYNC, running 7 CPU tasks, using slow DDR and being on a PCIE2 bus (older controller). GPU usage is a bit spiky. When the WU completes I'll enable SWAN_SYNC, reduce the CPU usage and reboot, to see how the next one fares.

Stefan,
I take it that you reduced the detail/accuracy making each step quicker?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 36844 - Posted: 16 May 2014 | 9:15:14 UTC - in response to Message 36842.
Last modified: 16 May 2014 | 9:16:52 UTC

No, the simulations should have the same accuracy. What I call "steps" or sometimes "chain-steps" are consecutive pieces of a simulation: you get sent a WU, finish it, and its endpoint is sent to another user to continue from. If you stick all these steps together you have one very long simulation. So I have simply told GPUGRID to keep sending out the ends of your simulations to other users for quite a while, so that I end up with very long simulations.
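
As a purely illustrative sketch (not GPUGRID's actual server code), the chaining idea looks roughly like this, with simulate_piece() standing in for one WU's worth of simulation:

import random

def simulate_piece(start_state):
    """Stand-in for one WU: advance the system and return the piece plus its endpoint."""
    end_state = start_state + random.random()  # dummy dynamics
    return [start_state, end_state], end_state

def run_chain(initial_state, n_chain_steps):
    """Each chain-step is one WU; its endpoint seeds the next WU,
    typically crunched on a different volunteer's machine."""
    state, trajectory = initial_state, []
    for _ in range(n_chain_steps):
        piece, state = simulate_piece(state)
        trajectory.extend(piece)
    return trajectory  # stitched together, this is one very long simulation

long_run = run_chain(initial_state=0.0, n_chain_steps=100)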

Although 5.5 hours does indeed sound too short... I will take a look.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36847 - Posted: 16 May 2014 | 10:11:01 UTC - in response to Message 36842.

My 770 is running at 1149.7MHz (GPU-Z), 93% load. But as I still use the older 331.82 driver, I cannot use SWAN_SYNC. The CPU is a Haswell from last year, the i7-4770K, but I am still using old RAM, 799.6MHz, 8GB in total. The CPU is only doing four Rosetta WU's, plus two for GPUGRID, as I also have a 780Ti in this PC, both using 0.749 CPUs. So indeed, skgiven, that helps a bit in speeding things up (less CPU use; we have both experimented with that). But I guess it's the driver. With the latest drivers the 780Ti is hampered, as reported by several other crunchers, and I saw that too when I installed the 337.50 beta and reverted back two days later.
I don't know if that has the same effect on the 770, as I did not look at that - wrong of course, not how it should be with research, but I was only focused on the 780Ti.
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36850 - Posted: 16 May 2014 | 10:16:49 UTC - in response to Message 36844.

Aha, now I understand, thanks for the explanation Stefan.

Although 5.5 hours does indeed sound too short... I will take a look.

Or perhaps you did find a way to get more out of the 780Ti, the "wonder card" that doesn't run optimally under Windows versions later than XP...

____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36851 - Posted: 16 May 2014 | 11:21:25 UTC - in response to Message 36850.
Last modified: 16 May 2014 | 11:34:03 UTC

Thanks for the explanation Stefan,

Don't look too hard at the GTX780Ti. It's 45% faster than a GTX770, and the rest of the difference (45min) is just down to my set-up (and it's a new set-up); I moved the card from another system as it was rattling. I was also running 3 climate models on that system (as well as 4 other CPU tasks). The climate models are greedy and don't scale well past 4 threads.
I will need to wait for an A2ART4E_adaptive2 WU to finish, then I might see another BARNA and be able to tell if my changes make much difference (I expect they will though).

The GTX780Ti is a GK110 card, whereas the GTX770 is GK104, so that might explain the difference in performance when comparing old drivers against new ones. It's also worth noting that if you use the new driver and SWAN_SYNC you need to configure Boinc to use 1 less CPU thread: although SWAN_SYNC forces the app to have a full CPU thread allocated to it, Boinc still budgets 0.799 CPUs, and because this is <1 it will try to run another CPU app. So, if you are using a 335 or later driver and you set SWAN_SYNC (and reboot), Boinc apps will use 1 more CPU thread.
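
For anyone who wants Boinc's CPU accounting to match that, an app_config.xml override in the GPUGRID project directory can reserve a full thread per GPU task. This is a sketch only; the app name acemdlong is an assumption, so check client_state.xml for the actual name on your install:

<app_config>
  <app>
    <name>acemdlong</name>  <!-- assumed name of the long-run app; verify in client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- budget a full CPU thread instead of 0.799 -->
    </gpu_versions>
  </app>
</app_config>

With cpu_usage set to 1.0, Boinc budgets a whole CPU thread for the GPU app and won't start an extra CPU task alongside it.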
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36860 - Posted: 17 May 2014 | 16:43:47 UTC - in response to Message 36851.

As expected I saw an improvement in runtime from using the better settings:

7h 26min rather than 8h 45min


____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36862 - Posted: 17 May 2014 | 21:35:43 UTC - in response to Message 36860.

As expected I saw an improvement in runtime from using the better settings:

7h 26min rather than 8h 45min


Good to read that it worked out nicely for your 770.
____________
Greetings from TJ

Dayle Diamond
Send message
Joined: 5 Dec 12
Posts: 84
Credit: 1,629,213,415
RAC: 672,941
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 39625 - Posted: 23 Jan 2015 | 14:46:04 UTC

The BARNA projects running in Dec. 2014 were at around step 60 of 100.
Then they sort of went away, or got de-prioritized?

I never see them anymore.
Did you get enough data, or are we just giving the rest of the lab a turn?

Stefan
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 39676 - Posted: 24 Jan 2015 | 13:30:18 UTC - in response to Message 39625.

They are finished. We simulated more than enough and now it's time to analyse them with some collaborators. Thank you very much for crunching them :)

Erich56
Send message
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 45852 - Posted: 23 Dec 2016 | 5:28:04 UTC - in response to Message 39676.

They are finished. We simulated more than enough and now it's time to analyse them with some collaborators. Thank you very much for crunching them :)

On the project status page, I just noticed though that the number of unsent SDOERR_BNB tasks is growing.
How come?

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1576
Credit: 5,601,711,851
RAC: 8,774,809
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45853 - Posted: 23 Dec 2016 | 9:47:11 UTC - in response to Message 45852.

They are finished. We simulated more than enough and now it's time to analyse them with some collaborators. Thank you very much for crunching them :)

On the project status page, I just noticed though that the number of unsent SDOERR_BNB tasks is growing.
How come?

That would be explained in the BNBS thread.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 45854 - Posted: 23 Dec 2016 | 11:23:10 UTC - in response to Message 45853.

That would be explained in the BNBS thread.

I guess you're talking about this posting:

I can't get new tasks on my Linux host (with a GTX 1080), while there are plenty of workunits in the queue.
It seems that the scheduler is lying about the number of available workunits in the long queue:

2016. dec. 22., csütörtök, 09:50:10 CET | GPUGRID | Sending scheduler request: Requested by project.
2016. dec. 22., csütörtök, 09:50:10 CET | GPUGRID | Requesting new tasks for CPU and NVIDIA GPU
2016. dec. 22., csütörtök, 09:50:12 CET | GPUGRID | Scheduler request completed: got 0 new tasks
2016. dec. 22., csütörtök, 09:50:12 CET | GPUGRID | No tasks sent
2016. dec. 22., csütörtök, 09:50:12 CET | GPUGRID | No tasks are available for Long runs (8-12 hours on fastest card)

("csütörtök"= thursday)


The big difference from that situation, though, is that all my hosts do in fact still download BNBS tasks. So this time they are not just shown on the project status page; they really exist and are available for download.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1576
Credit: 5,601,711,851
RAC: 8,774,809
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45860 - Posted: 23 Dec 2016 | 14:12:38 UTC - in response to Message 45854.

That would be explained in the BNBS thread.

I guess you're talking about this posting:

No, I was talking about the precise message 45808 that my link pointed to:

In case anyone is wondering what these WUs are, we are running some extra simulations on the Barnase Barstar system (previously called BARNA: http://gpugrid.net/forum_thread.php?id=3709#36142 ) to answer some questions of the reviewers. If the results get us through the review process this will be a major publication :)

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45861 - Posted: 23 Dec 2016 | 14:30:12 UTC
Last modified: 23 Dec 2016 | 14:33:09 UTC

At first view it seems that Pascal cards may benefit from this kind of job. My GTX 1080 is now utilized at 90-95% (even with WDDM), which is an amazing plus of 10% compared to other long runs.

Having said this, the total computing time is a disappointing 7-8 hours (I run two concurrent tasks), which is slower than the 770 and 780Ti mentioned earlier. How can this be?
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1576
Credit: 5,601,711,851
RAC: 8,774,809
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45862 - Posted: 23 Dec 2016 | 14:38:27 UTC - in response to Message 45861.

How can this be?

Because Erich has re-activated a two-year-old thread relating to the original research run. The parameters for the new 'pre-publication review questions' run - more properly discussed in the new thread - are likely to have been set differently, to take advantage of the increased computing power available in 2016 compared with what we were running in 2014.

3de64piB5uZAS6SUNt1GFDU9d...
Avatar
Send message
Joined: 20 Apr 15
Posts: 285
Credit: 1,102,216,607
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwat
Message 45864 - Posted: 23 Dec 2016 | 14:54:20 UTC
Last modified: 23 Dec 2016 | 14:55:29 UTC

I see... looked at that way, the 8 hours of the 1080 seem like a good value. I wonder how long the "older" high-end GPUs like a 780Ti, 980 and 980Ti need to finish this job. Just from the specs, the Pascal should have a major advantage over these cards, but in practice a 1080 was hardly faster, at least as far as regular long runs are concerned.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 45867 - Posted: 23 Dec 2016 | 16:51:09 UTC - in response to Message 45864.

I wonder how long the "older" high-end GPUs like a 780Ti, 980 and 980Ti need to finish this job.

My 980Ti's crunch these tasks in slightly over 10 hours (no WDDM).

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45868 - Posted: 23 Dec 2016 | 16:57:03 UTC - in response to Message 45861.

At first view it seems that Pascal cards may benefit from this kind of job. My GTX 1080 is now utilized to 90-95% (even with WDDM) which is an amazing plus of 10% compared to other long runs.

I just started up my GTX 970 running SDOERR_BNBS-2-4-RND5876_0 under Ubuntu 16.10.
At 1.5 hours, it shows 8.8% complete, or 17 hours total.

I am not sure how the Nvidia X Server Settings compare to GPU-Z, but it shows a Graphics Clock of 1366 MHz and a GPU Utilization of about 95-96%. So I think that is the card to use for me; even the 960s are a bit slow for this series.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1090
Credit: 6,603,906,926
RAC: 21,893,126
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 45871 - Posted: 23 Dec 2016 | 17:29:04 UTC

The GTX 970 in one of my PCs needs between 17 and 18 hours for these jobs;
the GPU clock is set at 1390MHz and the memory at 3505MHz (with NVIDIA Inspector). The GPU load, however, is not more than 88-89%, which may be due to the older CPU (Intel Core 2 Duo E8400 @ 3.74GHz).

Profile caffeineyellow5
Avatar
Send message
Joined: 30 Jul 14
Posts: 225
Credit: 2,658,976,345
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwat
Message 45886 - Posted: 24 Dec 2016 | 4:25:38 UTC - in response to Message 45864.

I see... looked at that way, the 8 hours of the 1080 seem like a good value. I wonder how long the "older" high-end GPUs like a 780Ti, 980 and 980Ti need to finish this job. Just from the specs, the Pascal should have a major advantage over these cards, but in practice a 1080 was hardly faster, at least as far as regular long runs are concerned.

I run 2 at a time per card on my 980Tis and these are running 17-21 hours each so far. But the credit reward is amazing!
____________
1 Corinthians 9:16 "For though I preach the gospel, I have nothing to glory of: for necessity is laid upon me; yea, woe is unto me, if I preach not the gospel!"
Ephesians 6:18-20, please ;-)
http://tbc-pa.org

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 45891 - Posted: 24 Dec 2016 | 11:20:07 UTC - in response to Message 45886.

1060-3GB: 18h 20min on Linux, with mostly 1 CPU task running. The GPU ran just shy of stock with the fan speed increased. Utilization was up to 98%. It would likely perform a bit better on a higher-end system. These seemed a bit less susceptible to CPU usage: it did take 30% longer with 3/3 CPU tasks running, but with other WU types GPU utilization would have dropped to 60% with just 2 CPU tasks running, and with 3 running it was a waste of time (10-20% GPU usage). Good credit as well. Overall these are very friendly tasks.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
