
Message boards : News : More CPU jobs

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49769 - Posted: 3 Jul 2018 | 11:59:53 UTC

...with a new and improved application (Linux only). The current version should eliminate dependencies on gcc and devel libraries.

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49770 - Posted: 3 Jul 2018 | 12:26:20 UTC - in response to Message 49769.

By the way, the new app downloads updated libraries. Feel free to reset the project to free up disk space taken by the old ones.

Erich56
Message 49771 - Posted: 3 Jul 2018 | 19:15:00 UTC

Why are these CPU jobs for Linux only, and not for Windows, too?

Keith Myers
Message 49773 - Posted: 4 Jul 2018 | 4:30:49 UTC - in response to Message 49771.

Because they can make the app work under Linux but are not successful yet in creating a Windows app that works.

Erich56
Message 49774 - Posted: 4 Jul 2018 | 5:38:24 UTC - in response to Message 49773.

Because they can make the app work under Linux but are not successful yet in creating a Windows app that works.

hm, this makes me wonder why it is so much more difficult to create an app for Windows than for Linux ...

Further, an easy way to solve this would be to have the Linux app run in a Virtual Machine (like, for example, LHC is doing for some of its sub-projects).

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49776 - Posted: 4 Jul 2018 | 6:25:59 UTC - in response to Message 49774.
Last modified: 4 Jul 2018 | 7:35:51 UTC

Making boinc apps is like building a ship in a bottle, in the sense that your tools are very limited and you don't control the environment. In the case of windows the bottle is dark. ;)

Edit. Jokes apart, we would prefer to have the app without a VM, which adds opacity and size. QM uses Python, which we try to ship with WUs in the most self-contained way possible. This means that several components have to fall into place. When they don't, finding the reason is usually guesswork, followed by a search for workarounds.

captainjack
Message 49780 - Posted: 4 Jul 2018 | 14:41:16 UTC

Erich56 said:

further, an easy way to solve this would be to have the Linux app run in a Virtual Machine


Erich56, if you want to run the Linux app in a Virtual Machine, you can create your own virtual machine, install Linux and BOINC, then run the QC tasks from there. That is what I have done on my Windows machines and it works fine.

tullio
Message 49781 - Posted: 4 Jul 2018 | 14:54:39 UTC
Last modified: 4 Jul 2018 | 14:55:06 UTC

I have been running CERN LHC@home Virtual Machines for more than ten years, and I have been rewarded with a CERN Polo Shirt. But yes, they do present some problems. Now your CPU tasks seem to run fine on my old SUN Workstation with SuSE Leap 42.3 Linux.
Tullio

[VENETO] boboviz
Message 49783 - Posted: 5 Jul 2018 | 9:50:41 UTC - in response to Message 49780.

That is what I have done on my Windows machines and it works fine.


+1
VirtualBox on my Win10.
But I think it's not the best solution for performance...

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49784 - Posted: 5 Jul 2018 | 10:02:41 UTC - in response to Message 49783.

As far as I know virtualization is almost native speed these days, especially for computing.

AuxRx
Message 49785 - Posted: 5 Jul 2018 | 12:01:51 UTC - in response to Message 49784.

The recent batch of CPU WUs seems to be done. Will there be more soon?

Stefan
Volunteer moderator
Project developer
Project scientist
Message 49786 - Posted: 5 Jul 2018 | 12:20:39 UTC - in response to Message 49785.

Yes, I am making some now. I'll try to submit new ones today.

Stefan
Volunteer moderator
Project developer
Project scientist
Message 49787 - Posted: 5 Jul 2018 | 14:24:27 UTC

Sorry, sorry, sorry I messed up due to a small mistake. Had to nuke the WUs. Redoing them now.

AuxRx
Message 49788 - Posted: 5 Jul 2018 | 14:53:52 UTC - in response to Message 49787.

No issue at all. I'm glad the team communicates openly.

Thank you for the heads-up.

PappaLitto
Message 49789 - Posted: 5 Jul 2018 | 23:55:14 UTC

All of my Stefan CPU WUs are stuck at 10% and I aborted them after about 4 hours. This is the machine (16.04 LTS) that has never had any issues with pretty much any of the WUs.

Here is the Error page: https://www.gpugrid.net/results.php?hostid=424454&offset=0&show_names=0&state=5&appid=

mmonnin
Message 49790 - Posted: 6 Jul 2018 | 2:35:38 UTC

Holy cow, the website is SOO SLOW. I had to use a proxy in Sweden just to get anything to load. I can't even get tasks, even though the site says there are plenty.

1 error recently:
http://gpugrid.net/result.php?resultid=18003621

Why is the project requesting 28 GB of disk space? I see three files stuck downloading that are less than 10 kB combined. The project folder and slot folders are nowhere close to 28 GB.
GPUGRID 7/5/2018 10:18:01 PM Message from server: Quantum Chemistry, beta test needs 17080.43MB more disk space. You currently have 11529.80 MB available and it needs 28610.23 MB.

These are taking 4.2-4.5 GB of memory? I thought this was much lower before. That PC has 128 GB of RAM, but Mint split the disk, so there's not a lot of disk space. 28 GB is a bit much.

tullio
Message 49791 - Posted: 6 Jul 2018 | 7:39:23 UTC

GPUGRID is taking 3.34 GB of disk space on my main Linux host, 3.90 on a Linux laptop. On the same laptop LHC@home is taking 5.75 GB.
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Message 49792 - Posted: 6 Jul 2018 | 8:09:46 UTC - in response to Message 49791.
Last modified: 6 Jul 2018 | 8:11:48 UTC

Yes, this is the stuff I resent to the beta queue, I guess. They are much larger molecules, so they were crashing on the QM queue because they ran out of scratch space. I have seen them use up to 18 GB of scratch space, so at the moment I don't know yet how to run these on GPUGRID, as it seems to be an issue for many users.

I doubt they were really stuck; they are just much slower to compute.

kain
Message 49793 - Posted: 6 Jul 2018 | 10:17:16 UTC

http://gpugrid.net/results.php?hostid=470907

I have a lot of errors :(

tullio
Message 49794 - Posted: 6 Jul 2018 | 10:46:21 UTC

Must we update conda?
Please update conda by running

$ conda update -n base conda
Tullio

Jim1348
Message 49795 - Posted: 6 Jul 2018 | 10:51:26 UTC - in response to Message 49792.

They are much larger molecules so they were crashing on the QM queue cause they ran out of scratch space. I have seen them use up to 18GB scratch space so at the moment I don't know yet how to run these on GPUGRID as it seems to be an issue with many users.

The most recent Betas have worked OK for me. But I have 32 GB memory, which may help.
http://www.gpugrid.net/results.php?hostid=334241&offset=0&show_names=0&state=0&appid=35

You could set up a special sub-project for the large molecules if you want to.

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49796 - Posted: 6 Jul 2018 | 11:38:18 UTC - in response to Message 49795.
Last modified: 6 Jul 2018 | 11:40:14 UTC

We are using QC beta to test large molecules and how much disk space they take. I think they can (temporarily, of course) go up to 20 GB of space (!). I am not sure about RAM - they should be < 4 GB.

We'll need to think whether to split the app in large and small.

Regarding conda: it's an "internal" installation. So no, you can't update, and you shouldn't. That message is unfortunate.

Best

kain
Message 49797 - Posted: 6 Jul 2018 | 11:47:00 UTC - in response to Message 49796.

We are using QC beta to test large molecules and how much disk space they take. I think they can (temporarily of course) go up to 20 GB of space (!). I am not sure about RAM - they should be < 4 GB.

We'll need to think whether to split the app in large and small.

Regarding conda: it's an "internal" installation. So no, you can't update, and you shouldn't. That message is unfortunate.

Best


Well... So my Threadripper could need up to 160GB of space?! It has just 32GB...

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49798 - Posted: 6 Jul 2018 | 12:15:16 UTC - in response to Message 49797.
Last modified: 6 Jul 2018 | 12:16:05 UTC


Well... So my Threadripper could need up to 160GB of space?! It has just 32GB...


We are talking about DISK space. Only a few WUs will be that big - unless we make a "big" queue.

T

[VENETO] boboviz
Message 49799 - Posted: 6 Jul 2018 | 12:26:41 UTC - in response to Message 49784.

As far as I know virtualization is almost native speed these days, especially for computing.


Yes, if you are using "hard" virtualization like ESX and Hyper-V.
"Soft" virtualization like VirtualBox or VMware Player may suffer bottlenecks.

JoergF
Message 49800 - Posted: 6 Jul 2018 | 12:26:59 UTC - in response to Message 49796.
Last modified: 6 Jul 2018 | 12:27:24 UTC

We are using QC beta to test large molcules and how much disk space they take.


Toni, if I may ask you, what molecule size are we (roughly) talking about? As you know, because of my son I have personal interest in HCF1 research, and I would like to get a feel how far science is still away from handling that large molecules. Thanks in advance and my apologies for coming up with my personal issues once in a while.
____________
I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

tullio
Message 49801 - Posted: 6 Jul 2018 | 12:54:37 UTC

I have a degree in Theoretical physics obtained in 1967, but that was related to elementary particle physics. Then in the Nineties, while at the Trieste Area Science Park as manager of a UNIX BULL Laboratory, I attended a few lectures at the UN Center for Genetic Engineering and Biotechnology on Density Functional Theory. Since retirement I have run a few BOINC projects, including one on the Monte Carlo method applied to quantum chemistry, but it no longer exists. This is the first time I am running a project which uses neural networks.
Tullio

kain
Message 49802 - Posted: 6 Jul 2018 | 13:35:39 UTC - in response to Message 49798.


Well... So my Threadripper could need up to 160GB of space?! It has just 32GB...


We are talking about DISK space. Only a few WUs will be that big - unless we make a "big" queue.

T

I'm also talking about disk space. I'm using a 32 GB Optane module as the boot drive. It's time to change it for something bigger.

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49804 - Posted: 6 Jul 2018 | 15:25:04 UTC - in response to Message 49802.

@kain: the disk space is used in the directory where BOINC is running. Usually (if you use the distribution installers) that is on the disk holding the root of the file system, indeed.

STARBASEn
Message 49805 - Posted: 6 Jul 2018 | 16:08:38 UTC

I had to enlarge the root partition to accommodate the QC beta 3.31, since BOINC from the Fedora distro by default installs in /var/lib/boinc and runs as a daemon under systemctl. After that, the WUs seemed to run fine, but they sure ate up a lot of RAM. Both my 8-core machines have 16 GB RAM, and I was running them 2 concurrent with 4 cores each. I think only two errored out; the rest completed and validated. Guess I'll have to max out my 8-core machines with 32 GB RAM to run the bigger molecules.
____________

Crunching since Feb 2003 (United Devices, Find-a-Drug)

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49806 - Posted: 6 Jul 2018 | 18:35:16 UTC - in response to Message 49805.

Are you able to set the disk limit in the BOINC preferences to prevent too many WUs from running?

Richard Haselgrove
Message 49807 - Posted: 6 Jul 2018 | 18:39:35 UTC - in response to Message 49806.

Easier and more precise simply to set max_concurrent in app_config.xml

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49808 - Posted: 6 Jul 2018 | 18:52:39 UTC - in response to Message 49807.
Last modified: 6 Jul 2018 | 18:56:07 UTC

Great to know, thanks. Actually, I was also wondering whether BOINC respects the disk limits.

Richard Haselgrove
Message 49809 - Posted: 6 Jul 2018 | 19:08:30 UTC - in response to Message 49808.

Worth it to perform the experiment, certainly. Possibly depends whether it respects the declared space needed (<rsc_disk_bound>), or the actual space used. If the latter, there might be a problem if the actual usage increases gradually during the run - BOINC might only check it when deciding whether to start a(nother) new task. Lots of fun to be had with those possibilities...

STARBASEn
Message 49810 - Posted: 6 Jul 2018 | 19:41:46 UTC

@Toni, yes, BOINC does appear to respect the client disk settings, as it lets one know in the event log if disk space is too low to run certain projects. I usually set a high arbitrary GB size, but the client appears to react to the real amount available in the execution partition and uses the percentage limits to notify the user when disk space is too low. I had to readjust the percent limits higher (in the client settings) a few days ago to run the 3.30 app on one of my machines, probably due to the project directory getting too full. I hate to reset the project and lose WUs, but I suppose I will have to eventually.

JoergF
Message 49811 - Posted: 6 Jul 2018 | 19:45:40 UTC - in response to Message 49800.

We are using QC beta to test large molcules and how much disk space they take.


Toni, if I may ask you, what molecule size are we (roughly) talking about? As you know, because of my son I have personal interest in HCF1 research, and I would like to get a feel how far science is still away from handling that large molecules. Thanks in advance and my apologies for coming up with my personal issues once in a while.


I get it. Thank you so much for your help.

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 49812 - Posted: 6 Jul 2018 | 22:09:26 UTC - in response to Message 49800.
Last modified: 6 Jul 2018 | 22:12:00 UTC


Toni, if I may ask you, what molecule size are we (roughly) talking about?


The molecules for QM are 50 atoms max or so. The size is, however, not very indicative; this is a specific "chemistry-oriented" type of calculation.

JoergF
Message 49813 - Posted: 6 Jul 2018 | 22:49:31 UTC - in response to Message 49812.


Toni, if I may ask you, what molecule size are we (roughly) talking about?


The molecules for QM are max 50 atoms or so. The size is however not very indicative. This is a specific "chemistry-oriented" type of calculations.


Thank you VERY much for that line, I really appreciate that. I was already afraid of being a constant bother. Of course I understand that we are still years or even decades away from handling huge proteins like HCF1 and I don't want to be obtrusive. Having said that, I would like to keep sight of those long term targets.

Thanks again... if I may, I will get back to you with this question in a couple of years. But I am glad that Gpugrid and its team is more than just being "exclusively academic". There actually is a vision of the future we can believe in.

tullio
Message 49814 - Posted: 6 Jul 2018 | 23:18:18 UTC

On an edX online course on quantum computers which I followed recently, there was a professor at Dartmouth University who uses a quantum computer to do quantum chemistry calculations.
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Message 49820 - Posted: 9 Jul 2018 | 7:47:14 UTC - in response to Message 49813.

Hey JoergF, while we are not doing proteins with QM yet (some other groups are trying to do that with networks), what we are calculating is directly related to drug design, so I think it is very relevant.
Doing whole proteins at QM-level accuracy will, I think, take quite a few more years.

JoergF
Message 49822 - Posted: 9 Jul 2018 | 12:27:44 UTC - in response to Message 49820.
Last modified: 9 Jul 2018 | 12:30:02 UTC

Thank you very much. Which kind of contribution will help you most in order to make progress on proteins (in the long run of course)? Because I am just considering whether to buy an additional GPU or CPU this autumn.

Stefan
Volunteer moderator
Project developer
Project scientist
Message 49823 - Posted: 9 Jul 2018 | 13:03:52 UTC - in response to Message 49822.

We as a group are not really focusing on applying QM to proteins. The problem is twofold:

a) QM-trained neural networks are much faster than QM but still slower than MD (molecular dynamics). MD, which we can use already, is in many cases too slow to study big proteins. So I personally don't see much of a reason yet to simulate proteins with QM.

b) There are other groups working on applying them to proteins, but this will be a very difficult challenge over many years.

My focus right now is instead to improve MD so that it's more reliable in its predictions. MD is done on the GPU in this project, and QM, for the moment, on the CPU. But it would be hard for me to tell you which to prioritize, as both really help in the end.

JoergF
Message 49824 - Posted: 9 Jul 2018 | 16:20:54 UTC - in response to Message 49823.
Last modified: 9 Jul 2018 | 16:24:38 UTC

Thank you... no problem. So we just keep on crunching on all sides and see where the road leads us to. :)
By the way, congratulations to all of you guys at Gpugrid on your great work.

Profile Chilean
Message 49825 - Posted: 10 Jul 2018 | 12:46:41 UTC

I'm trying out the AMD EPYC trial from Packet, it runs 48 QM WUs at a time... all valid. I'm thinking of leaving it crunching these instead of Rosetta (the rest of my PCs all run Windows...). Hope it helps!

PappaLitto
Message 49830 - Posted: 10 Jul 2018 | 15:26:11 UTC - in response to Message 49825.
Last modified: 10 Jul 2018 | 15:26:36 UTC

I'm trying out the AMD EPYC trial from Packet, it runs 48 QM WUs at a time... all valid. I'm thinking of leaving it crunching these instead of Rosetta (the rest of my PCs all run Windows...). Hope it helps!

Wow! That's a lot of compute!

JoergF
Message 49831 - Posted: 10 Jul 2018 | 15:29:52 UTC - in response to Message 49830.

I'm trying out the AMD EPYC trial from Packet, it runs 48 QM WUs at a time... all valid. I'm thinking of leaving it crunching these instead of Rosetta (the rest of my PCs all run Windows...). Hope it helps!

Wow! That's a lot of compute!


Epyc with 48 threads ... I go green with envy :-))

Thomas
Message 49834 - Posted: 10 Jul 2018 | 17:39:17 UTC - in response to Message 49831.

48 QM WUs are 192 (CPU) threads. I need 4 computers to reach that.

Profile Chilean
Message 49839 - Posted: 10 Jul 2018 | 21:16:21 UTC - in response to Message 49834.
Last modified: 10 Jul 2018 | 21:17:57 UTC

48 QM WUs are 192 (CPU) threads. I need 4 computers to reach that.


I just realized from your comment that it actually crunches 12 WUs at a time (I just saw all 48 threads running @ 100% and immediately thought it was running 48 WUs, just like Rosetta).


I am not a smart man.

morgan
Message 49840 - Posted: 10 Jul 2018 | 21:23:02 UTC

Well, I run QC on a 2-core PC just for fun :-)

But now I get: No tasks sent
This computer has finished a daily quota of 8 tasks

That's not fun!!

Profile Retvari Zoltan
Message 49842 - Posted: 10 Jul 2018 | 21:43:24 UTC - in response to Message 49825.
Last modified: 10 Jul 2018 | 21:44:21 UTC

I'm trying out the AMD EPYC trial from Packet, it runs 48 QM WUs at a time... all valid.
To everybody using hyper-threaded CPUs for crunching:
You should test how well the given app scales with HT on or off on your system. The other approach is to leave HT on, but lower the percentage of usable CPUs in BOINC manager (down to 50%). Too many simultaneous memory-intensive apps cause too many cache misses, resulting in degraded combined performance. With HT off (or with usable CPUs set to 50%), calculation time should be halved (since two threads share one FPU). If it's more than half, then the number of usable CPUs could be increased, as long as the RAC rises accordingly (i.e. in direct ratio).
I can't test it myself until the Windows app has been released, but I'm interested.
A simultaneous GPU task could also degrade the performance of the CPU tasks, and vice versa.
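For anyone wanting to try the 50% experiment described above, BOINC reads a local preferences override file from its data directory. A minimal sketch (the path is an assumption that varies by distro and install method):

```xml
<!-- global_prefs_override.xml - place in the BOINC data directory
     (e.g. /var/lib/boinc-client on many Linux distros; path varies),
     then tell the client to re-read preferences: BOINC Manager ->
     Options -> Read local prefs file, or
     `boinccmd --read_global_prefs_override`. -->
<global_preferences>
   <!-- Use at most half of the logical CPUs, i.e. roughly one
        thread per physical core on a hyper-threaded machine. -->
   <max_ncpus_pct>50.0</max_ncpus_pct>
</global_preferences>
```

This overrides only the settings it contains; everything else still comes from the web preferences.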

mmonnin
Message 49843 - Posted: 10 Jul 2018 | 23:28:08 UTC

Most tasks benefit from HT, but I only recall one doing better overall with HT off on my 2670v1s.

The new Threadrippers, with 4 dies but only 2 having direct memory access (meaning 2 dies always have to make a hop to access memory), would make for some interesting testing.

AuxRx
Message 49856 - Posted: 13 Jul 2018 | 14:09:11 UTC - in response to Message 49842.
Last modified: 13 Jul 2018 | 14:13:35 UTC

I have a related question I cannot answer myself.

I am seeing very unsteady CPU and memory usage with QC. CPU% drops to 70-80%, with MEM% dropping as well (however they are defined by Ubuntu), after a few seconds, then both peak again.

1) Is this a function of the algorithm or a limitation of my system?

2) Can I limit the number of cores used by QC?

As it happens, I've found an issue today where BOINC crunches 3 WCG tasks and one 4-core QC task at the same time on a four-core processor. CPU usage has been stuck at 7.x for an hour at least.

QC uses 4 cores by default, and I haven't found an option to limit the number of cores. Also, QC seems to be hogging my CPU at most times, even though it was added only a week ago. This has led to a bottleneck with WCG tasks due soon.

Jim1348
Message 49857 - Posted: 13 Jul 2018 | 17:13:51 UTC - in response to Message 49856.

2) Can I limit the number of cores used by QC?

As it happens I've found an issue today where BOINC crunches 3 WCG and one 4 core QC task at the same time on a four core processor. CPU usage has been stuck at 7.x for an hour at least.

QC uses 4 cores by default and I haven't found an option to limited the number of cores. Also, QC seems to be hogging my CPU at most times, eventhough it has been added a week ago. This has led to a bottleneck with WCG tasks due soon.

Use an "app_config.xml" file to limit the number of cores per work unit, and also the number of QC work units running if you wish.
http://www.gpugrid.net/forum_thread.php?id=4748&nowrap=true#49369

I have found that QC is tough on resources too. Even though I reserved a CPU core to support a GTX 1070 on Folding, running QC still caused a drop in Folding points, showing that the GPU was being starved for CPU support.

To fix that, I now run only six cores of my i7-4770 on CPU work, and leave two cores to support the GPU. But even that was not enough, so I run 4 cores on QC (two work units running two cores each) with the other two on LHC/native ATLAS. That frees up enough CPU resources so that I see only a minimal drop in Folding points.
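For reference, a minimal app_config.xml along the lines Jim1348 describes might look like the sketch below. The app name and plan class are assumptions - check the <name> entries in client_state.xml for the actual QC application name on your host:

```xml
<!-- app_config.xml - place in projects/www.gpugrid.net/ inside the
     BOINC data directory, then re-read config files from the Manager
     (Options -> Read config files). -->
<app_config>
   <app>
      <!-- Hypothetical app name; verify against client_state.xml. -->
      <name>QC</name>
      <!-- Run at most two QC work units at once. -->
      <max_concurrent>2</max_concurrent>
   </app>
   <app_version>
      <app_name>QC</app_name>
      <!-- Plan class assumed to be the multithreaded "mt" class. -->
      <plan_class>mt</plan_class>
      <!-- Two cores per work unit instead of the default four. -->
      <avg_ncpus>2</avg_ncpus>
   </app_version>
</app_config>
```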

AuxRx
Message 49858 - Posted: 13 Jul 2018 | 19:30:15 UTC - in response to Message 49857.

Use an "app_config.xml" file to limit the number of cores per work unit, and also the number of QC work units running if you wish.


Thanks Jim1348, I just tried, but without improvement. Seems to be connected to the algorithm.

mmonnin
Message 49859 - Posted: 14 Jul 2018 | 3:03:35 UTC - in response to Message 49858.

Use an "app_config.xml" file to limit the number of cores per work unit, and also the number of QC work units running if you wish.


Thanks Jim1348, I just tried, but without improvement. Seems to be connected to the algorithm.


You must tell BOINC to re-read config files to pick up the changes. Tasks already downloaded will still say 4C; only new ones will say 1C or 2C, but all will run at your new setting.

It's a BOINC thing to sometimes squeeze in more tasks than cores. I've seen it happen on my 3570K: when a single-threaded task completes and a 4-core task starts, it will show more running for a bit, but it eventually corrects.

AuxRx
Message 49860 - Posted: 14 Jul 2018 | 14:03:48 UTC - in response to Message 49859.

You must tell BOINC to reread configs to pick up the changes. Tasks already downloaded will still say 4c even. Only new ones will say 1c or 2c but all will run at your new setting.


I did, but it still required a reboot. Tasks that were previously 4-core appeared as x-core after a reboot and were crunched as such. Credit might take a hit, but I didn't mind for this test.

Its a BOINC thing to sometimes squeeze in more tasks than cores. I've seen it happen on my 3570k when a single threaded task completes and a 4c tasks starts it will show more running for a but but it eventually corrects.


I thought that was the issue, but it wasn't. I even suspended one/several/all QC tasks, but if/when BOINC could start another task it always did, despite CPU% being >400%. CPU% stayed >700% (i.e. 7 tasks running) for an hour plus.

It is working now.

PappaLitto
Send message
Joined: 21 Mar 16
Posts: 399
Credit: 2,743,869,442
RAC: 1,108,426
Level
Phe
Scientific publications
watwat
Message 49968 - Posted: 20 Jul 2018 | 10:50:05 UTC

Looks like the CPU WU Queue is almost running dry

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 49969 - Posted: 20 Jul 2018 | 12:34:52 UTC - in response to Message 49968.

Holidays...mumble...something...something...holidays :D hahah. I restocked them now. From Monday I'll be back working so I'll take more care of my WUs

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50017 - Posted: 23 Jul 2018 | 18:52:33 UTC

I downloaded 4 QC tasks on my Windows 10 PC and of course they failed. But why does the server send me QC tasks on a Windows PC?
Tullio

Erich56
Send message
Joined: 1 Jan 15
Posts: 471
Credit: 2,331,035,852
RAC: 1,929,062
Level
Phe
Scientific publications
watwatwatwat
Message 50019 - Posted: 23 Jul 2018 | 19:02:45 UTC - in response to Message 50017.

I downloaded 4 QC tasks on my Windows 10 PC and of course they failed. But why does the server send me QC tasks on a Windows PC?
Tullio

The same is true for GPU tasks - one can download them on a Windows OS, and they fail after a few seconds.

PappaLitto
Send message
Joined: 21 Mar 16
Posts: 399
Credit: 2,743,869,442
RAC: 1,108,426
Level
Phe
Scientific publications
watwat
Message 50273 - Posted: 22 Aug 2018 | 11:00:04 UTC

Just a notice to Stefan, only a few days left of CPU WUs in the queue.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50276 - Posted: 23 Aug 2018 | 7:11:46 UTC - in response to Message 50273.
Last modified: 23 Aug 2018 | 7:12:10 UTC

Thanks, I noticed :) I'm in the process of creating new WUs. The issue is that they are more demanding than the last batch: the largest one used 50 GB of scratch space, so we are trying to figure out ways to make them use less disk at the cost of more computation time.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50277 - Posted: 23 Aug 2018 | 9:25:49 UTC - in response to Message 50276.

My HP Linux laptop, running SuSE Leap 15.0 after Leap 42.3 (any relationship to SLES 15.0?), has 752.37 GB available to BOINC. In contrast, my older SUN workstation running SuSE Leap 42.3 has at most 30 GB of a 1 TB disk available to BOINC 7.8.3.
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50278 - Posted: 23 Aug 2018 | 10:39:53 UTC

So the ones I am sending out now should use at most around 6 GB of scratch space on /tmp/. If you hit any problems, feel free to report them here.

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 9 Dec 08
Posts: 740
Credit: 4,285,282
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 50280 - Posted: 23 Aug 2018 | 15:05:03 UTC - in response to Message 50278.
Last modified: 23 Aug 2018 | 16:58:32 UTC

Minor note: barring changes I am unaware of, the scratch space used during the run is in the slot directory. (/tmp is limited on many systems)

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50281 - Posted: 23 Aug 2018 | 15:34:05 UTC - in response to Message 50280.

I have a QC task running on my Linux laptop. It is at 73% after 9:07:27 hours, but its slot is empty.
Tullio

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50282 - Posted: 24 Aug 2018 | 4:59:55 UTC
Last modified: 24 Aug 2018 | 5:58:48 UTC

Two QC tasks failed on my main Linux box, which has a 30 GB limit for BOINC 7.8.3, with the same message: DISK USAGE LIMIT EXCEEDED. A GPU task is running fine on its GTX 750 Ti at 61 C.
Tullio
Same error after 55 minutes also on the laptop, despite the 750 GB available to BOINC. But where is that scratch file? I cannot see it in either /tmp or /var/lib/boinc/slots.
____________

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50283 - Posted: 24 Aug 2018 | 6:32:50 UTC - in response to Message 50282.
Last modified: 24 Aug 2018 | 6:36:41 UTC

OK I have a feeling we hit a file-size limit of BOINC and not of the drives. I'll chat it up with Toni and see what we can do.

For the moment I'll cancel them since they nearly all fail with the same disk usage error.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 883
Credit: 1,734,625,070
RAC: 1,199,858
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 50285 - Posted: 24 Aug 2018 | 12:15:43 UTC - in response to Message 50283.

OK I have a feeling we hit a file-size limit of BOINC and not of the drives. I'll chat it up with Toni and see what we can do.

For the moment I'll cancel them since they nearly all fail with the same disk usage error.

Every workunit sent out by a BOINC server has an associated value

<rsc_disk_bound>

(in bytes). That value - set by the project - has to be large enough to accommodate all anticipated disk usage. If you use more than you've declared in advance, 'DISK USAGE LIMIT EXCEEDED' is exactly the error message you'd expect.
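For illustration, this bound sits in the server-side workunit template next to the other resource estimates; a hypothetical fragment (all numbers invented):

```xml
<!-- Server-side workunit template fragment; values are illustrative only. -->
<workunit>
    <rsc_fpops_est>1e15</rsc_fpops_est>
    <rsc_memory_bound>4000000000</rsc_memory_bound>   <!-- ~4 GB RAM -->
    <rsc_disk_bound>30000000000</rsc_disk_bound>      <!-- ~30 GB disk, incl. scratch -->
</workunit>
```

If a task's slot directory grows past <rsc_disk_bound>, the client aborts it, which is exactly the 'DISK USAGE LIMIT EXCEEDED' outcome described above.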

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50292 - Posted: 26 Aug 2018 | 1:26:48 UTC
Last modified: 26 Aug 2018 | 1:27:23 UTC

Completed and validated two QC tasks on my main Linux host. A GPU task is running on it at a 1202 MHz clock, 5400 MHz memory transfer, and 61 C on its GTX 750 Ti, driver 384.111.
Tullio
____________

Erich56
Send message
Joined: 1 Jan 15
Posts: 471
Credit: 2,331,035,852
RAC: 1,929,062
Level
Phe
Scientific publications
watwatwatwat
Message 50293 - Posted: 26 Aug 2018 | 6:37:14 UTC - in response to Message 50292.

Completed and validated two QC tasks on my main Linux host.

It's really too bad that QC is not available for Windows :-(

Thomas
Send message
Joined: 23 Feb 17
Posts: 15
Credit: 267,820,364
RAC: 123,382
Level
Asn
Scientific publications
wat
Message 50294 - Posted: 26 Aug 2018 | 8:24:40 UTC - in response to Message 50293.

just get Linux installed.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50296 - Posted: 26 Aug 2018 | 17:14:28 UTC

Two more QC tasks completed, two ready to start. Thanks.
Tullio
____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50301 - Posted: 27 Aug 2018 | 7:04:58 UTC

CPU usage reaches 197% on my old Opteron 1210 with 2 cores, 145% when a GPU task is also running. RAM is 8 GB.
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50302 - Posted: 27 Aug 2018 | 7:12:36 UTC

Sorry, but these are still the same old QC jobs. Thanks for the reports, but not much should have changed. I had to put some more of the old ones in the queue while we fix the app space configuration, so that I can then send the new "SELE*" workunits.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50306 - Posted: 28 Aug 2018 | 7:51:01 UTC
Last modified: 28 Aug 2018 | 7:51:23 UTC

I canceled the remaining QMML50_2 jobs because I found out that some of them might be duplicates of already-calculated WUs: a minor issue when retrieving results left some behind. I am redoing the calculation of the missing WUs now to make sure the ones I send out are correct. It might take me a day, so please be patient.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50307 - Posted: 28 Aug 2018 | 11:07:33 UTC

SELE2 WUs are being sent out now. Toni increased the allowed space for the app to 30 GB. From my tests, each WU should not use more than 6 GB of space (for the largest molecule). If you run many in parallel you might hit the limit, though? I'm not certain about that.

Let's see how it goes this time!

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 86
Credit: 156,476,155
RAC: 474,306
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwat
Message 50308 - Posted: 28 Aug 2018 | 11:43:25 UTC - in response to Message 50301.

CPU usage reaches 197% on my old Opteron 1210 with 2 cores, 145% when a GPU task is also running. RAM is 8 GB.
Tullio


Wouldn't it be more energy-efficient to run a newer CPU? It's 100 W for 2 cores @ 1 GHz.

Unless your electricity is free :D
____________

Thomas
Send message
Joined: 23 Feb 17
Posts: 15
Credit: 267,820,364
RAC: 123,382
Level
Asn
Scientific publications
wat
Message 50309 - Posted: 28 Aug 2018 | 12:47:07 UTC

new WUs don't seem to work: they consume a lot of memory, throw computation errors or just rest at 10% progress forever.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50310 - Posted: 28 Aug 2018 | 12:57:51 UTC

I see 89 successes and 17 errors. Seems ok for a start. I'll look into the errors but they don't seem to be broken as a whole.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50311 - Posted: 28 Aug 2018 | 12:59:34 UTC
Last modified: 28 Aug 2018 | 13:00:12 UTC

Actually, 14 of the total 17 failures are on your machines, Thomas, so it might be specific to your setup. Generally they seem OK.
They should use only 4 GB of memory per WU.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50312 - Posted: 28 Aug 2018 | 13:00:45 UTC - in response to Message 50308.
Last modified: 28 Aug 2018 | 13:20:56 UTC

It's running at 1.8 GHz, and I have a 1220 Opteron at 2.8 GHz in my drawer. It's been running since January 2008. My electricity costs me 0.21 euro/kWh and I have 3 computers running 24/7: this Opteron, an AMD E-450, and an A10-6700, which should have 4 cores but Windows Task Manager says 2 cores and 4 logical processors. My total electricity expenditure is about 60 euro/month.
Tullio
I forgot to mention my Ulefone smartphone with its arm64-v8a CPU running Android 7.1.1 on SETI@home and Einstein@home.

Profile Conan
Send message
Joined: 25 Mar 09
Posts: 24
Credit: 427,321
RAC: 522
Level

Scientific publications
wat
Message 50314 - Posted: 28 Aug 2018 | 13:02:27 UTC

I have an Intel 8-core (16-thread) Xeon server with a 146 GB disk drive (it has 2 of them, but one died). It also has 24 GB RAM.
WUs are allowed to run with 8 cores.

I am getting the message that I need 28610.23 MB of disk space; I currently have 9486.42 MB spare, so it needs another 19123.81 MB.

I leave 10 GB that BOINC can't use, other programmes use 12.69 GB, and BOINC is using 17.02 GB.

Of that 17.02 GB that BOINC is using, GPUGrid is using 8.29 GB, even when it is not running anything.

If I allowed all my spare space to be used I would just have enough disk space for GPUGrid to run (maybe), but I don't intend to give all that space to BOINC, so I can't download and run some of these work units.

If they are 6 GB then there is no problem.

Why does GPUGrid need over 8 GB of disk space just to hold the project files?
(I have another computer showing the same amount of used disk space, so this is the normal amount used by the project - but why?)

(My other computer has a much larger disk so is not having the same issues.)

Conan

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50316 - Posted: 28 Aug 2018 | 13:33:33 UTC - in response to Message 50314.

@Conan: the QM calculations need to store lots of data in memory for best performance. Since we cannot ask for 20 GB of RAM, the software instead writes any calculation data that exceeds the RAM limit (4 GB) to the hard drive.

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 9 Dec 08
Posts: 740
Credit: 4,285,282
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 50317 - Posted: 28 Aug 2018 | 13:48:55 UTC - in response to Message 50316.

The current "disk limit" for CPU jobs is set at 20 GB. This is a ballpark estimate to accommodate both the software and libraries (largish by themselves) and the temporary (scratch) data.

The software is reused between WUs, but you can reclaim the space by resetting the project. The scratch space is only occupied when a WU is running (or paused).

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 9 Dec 08
Posts: 740
Credit: 4,285,282
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 50318 - Posted: 28 Aug 2018 | 13:49:59 UTC - in response to Message 50309.

new WUs don't seem to work: they consume a lot of memory, throw computation errors or just rest at 10% progress forever.


On your failures I see "connection errors". Could be firewall filtering, or the like.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50323 - Posted: 28 Aug 2018 | 17:10:17 UTC

First SELE task done by my Old Faithful Opteron 1210 running SuSE Linux Leap 42.3.
Tullio

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50335 - Posted: 30 Aug 2018 | 8:23:47 UTC

I have a funny SELE task on my Linux laptop. It is stuck at 10% after 14 hours 38 min, but the estimated time remaining has risen to more than 5 days. All seems normal according to the "top" command, and it has lots of disk space.
Tullio
____________

[VENETO] boboviz
Send message
Joined: 10 Sep 10
Posts: 99
Credit: 252,641
RAC: 344
Level

Scientific publications
wat
Message 50336 - Posted: 30 Aug 2018 | 9:30:10 UTC - in response to Message 50318.

new WUs don't seem to work: they consume a lot of memory, throw computation errors or just rest at 10% progress forever.


On your failures I see "connection errors". Could be firewall filtering, or the like.


No firewall here.
And same problem.

Thomas
Send message
Joined: 23 Feb 17
Posts: 15
Credit: 267,820,364
RAC: 123,382
Level
Asn
Scientific publications
wat
Message 50337 - Posted: 30 Aug 2018 | 10:29:37 UTC - in response to Message 50336.

As I said, those WUs do not work properly. I am off to another project and will come back if they are fixed.

Profile Conan
Send message
Joined: 25 Mar 09
Posts: 24
Credit: 427,321
RAC: 522
Level

Scientific publications
wat
Message 50343 - Posted: 30 Aug 2018 | 12:32:31 UTC
Last modified: 30 Aug 2018 | 12:32:58 UTC

OK, thanks Toni and Stefan for the information, that explains a lot.

I will run what I can.

Thanks again
Conan

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50347 - Posted: 30 Aug 2018 | 13:41:55 UTC
Last modified: 30 Aug 2018 | 13:42:26 UTC

In the slot of a running task there is an output directory which leads to a report of what the program is doing in physical terms. Maybe some explanation by the admins would be welcome.
Tullio
____________

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50353 - Posted: 31 Aug 2018 | 8:19:05 UTC

We investigated another algorithm which doesn't use any scratch disk space. Unfortunately, in my test it was 13x slower than the one that uses disk (25 minutes became 5:30 hours).
So it is not a realistic choice for us. After this batch of simulations I will probably have to submit more WUs that use up to 30 GB of scratch space each, so I assume we are going to fill up some disks.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50354 - Posted: 31 Aug 2018 | 8:31:45 UTC - in response to Message 50353.
Last modified: 31 Aug 2018 | 8:47:07 UTC

I have plenty of disk space on my two Linux boxes because the slots directory is in my /home/user partition, which has more than 700 GB on both my SuSE Linux Leap 42.3 and Leap 15.0 systems. What amazes me is that QC tasks are always stuck at 10% progress, while GPU tasks show their progress increasing.
Tullio
____________

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 9 Dec 08
Posts: 740
Credit: 4,285,282
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 50355 - Posted: 31 Aug 2018 | 10:33:02 UTC - in response to Message 50354.
Last modified: 31 Aug 2018 | 10:34:25 UTC

I have plenty of disk space on my two Linux boxes because the slots directory is in my /home/user partition, which has more than 700 GB on both my SuSE Linux Leap 42.3 and Leap 15.0 systems. What amazes me is that QC tasks are always stuck at 10% progress, while GPU tasks show their progress increasing.
Tullio


The 10% progress is explained as follows: updating the app (if necessary) accounts for the first 10%, and usually happens immediately. The remaining 90% advances as molecules are calculated (e.g. 5 molecules = 90%/5 increments). However, very big WUs have only one molecule, so there is no apparent progress until the end. (We have no finer-grained progress reporting.)
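Toni's description amounts to a simple formula; a sketch of it (my own paraphrase, not project code):

```python
def reported_progress(molecules_done: int, total_molecules: int) -> float:
    """Approximate the progress BOINC Manager shows for a QC task:
    a flat 10% for the app-update step, then 90% split evenly per molecule."""
    return 0.10 + 0.90 * (molecules_done / total_molecules)
```

So a 5-molecule WU advances in 18% steps, while a 1-molecule WU sits at 10% until it completes.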

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 86
Credit: 156,476,155
RAC: 474,306
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwat
Message 50389 - Posted: 4 Sep 2018 | 12:27:11 UTC - in response to Message 50355.

I have plenty of disk space on my two Linux boxes because the slots directory is in my /home/user partition, which has more than 700 GB on both my SuSE Linux Leap 42.3 and Leap 15.0 systems. What amazes me is that QC tasks are always stuck at 10% progress, while GPU tasks show their progress increasing.
Tullio


The 10% progress is explained as follows: updating the app (if necessary) accounts for the first 10%, and usually happens immediately. The remaining 90% advances as molecules are calculated (e.g. 5 molecules = 90%/5 increments). However, very big WUs have only one molecule, so there is no apparent progress until the end. (We have no finer-grained progress reporting.)


So how much space do these WUs need? I'm running 12 at a time with 64 GB of RAM but no swap space. I see that not all 48 threads are at 100%; I'm thinking it's the lack of swap.
____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50390 - Posted: 4 Sep 2018 | 12:45:44 UTC - in response to Message 50389.

In the old UNIX days, a rule of thumb was that you needed swap space twice the RAM, which was usually small. Now RAM is plentiful. I have 22 GB of RAM on the Windows 10 PC and 8 GB on each Linux box. GPUGRID CPU tasks use some swap, but most of it is not used.
tullio

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 86
Credit: 156,476,155
RAC: 474,306
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwat
Message 50391 - Posted: 4 Sep 2018 | 13:01:41 UTC

I upped the swap to 300 GB, but it only seems to be using RAM. Is this "scratch space" used in swap, or does the WU use the file directory for storage? I'm thinking it's the latter, since the BOINC space usage goes up and down.

Thing is, my install directory is only 120 GB...

I also have this feeling that I now have 300 GB of swap space for nothing, lol. I am not a smart man.
____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50392 - Posted: 4 Sep 2018 | 13:19:06 UTC - in response to Message 50391.

I see temporary files in the slots/0 directory. They are named psi.25019.number.
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50393 - Posted: 4 Sep 2018 | 15:59:03 UTC
Last modified: 4 Sep 2018 | 15:59:17 UTC

Yes, AFAIK it doesn't use swap space, so increasing that will not help. The scratch is probably where Tullio mentioned: the files are called `psi.XXXXX.XX`; usually there are two, and the second can grow significantly.
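To see how much scratch the tasks are actually using, one can total the psi.* files across the slot directories. A small sketch; the /var/lib/boinc/slots default is an assumption, so adjust it to your BOINC data directory:

```python
import glob
import os

def scratch_usage_bytes(slots_dir: str = "/var/lib/boinc/slots") -> int:
    """Sum the sizes of psi4 scratch files (psi.*) across all BOINC slots.
    The default path is an assumption; point it at your own data directory."""
    total = 0
    for path in glob.glob(os.path.join(slots_dir, "*", "psi.*")):
        try:
            total += os.path.getsize(path)
        except OSError:
            pass  # a scratch file may vanish while the task runs
    return total

if __name__ == "__main__":
    print(f"{scratch_usage_bytes() / 1e9:.2f} GB of psi4 scratch in use")
```

Running it periodically (e.g. under `watch`) shows the second scratch file growing during a task.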

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50394 - Posted: 4 Sep 2018 | 16:20:32 UTC - in response to Message 50393.

Stefan, I see 4 plus one which says psi.30091.clean
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50397 - Posted: 5 Sep 2018 | 11:34:11 UTC

I'm fixing an issue with SELE2, so I cancelled them and will send out SELE3 in a bit.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 883
Credit: 1,734,625,070
RAC: 1,199,858
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 50398 - Posted: 5 Sep 2018 | 12:17:35 UTC - in response to Message 50397.

I'm fixing an issue with SELE2, so I cancelled them and will send out SELE3 in a bit.

Is this related to the 'upload failure - file size too big' problem reported for SELE2 last week? Whether or not, please double-check the <max_nbytes> value for the new batch.
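For completeness, <max_nbytes> lives in the server's output-file (result) template rather than the workunit template; a hypothetical fragment (the 100 MB cap is invented):

```xml
<!-- Server-side result template fragment; the size cap is illustrative only. -->
<file_info>
    <name><OUTFILE_0/></name>
    <generated_locally/>
    <upload_when_present/>
    <max_nbytes>100000000</max_nbytes>
    <url><UPLOAD_URL/></url>
</file_info>
```

If the app writes an output file larger than this cap, the client refuses to upload it and the task fails with 'file size too big'.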

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 86
Credit: 156,476,155
RAC: 474,306
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwat
Message 50399 - Posted: 5 Sep 2018 | 12:47:46 UTC

I had to add WCG to this 48-thread beast because it isn't using all of the threads @ 100% when running GPUGRID only. I'd wager it's a scratch-space bottleneck (it's running an SSD though, 200 MB/s according to hdparm)... ?
____________

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50400 - Posted: 5 Sep 2018 | 12:52:31 UTC

No, the issue was with an old version of psi4 giving wrong results on large molecules when using the scratch space. This is fixed in the latest version now.

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 86
Credit: 156,476,155
RAC: 474,306
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwat
Message 50405 - Posted: 5 Sep 2018 | 14:55:45 UTC - in response to Message 50400.

No, the issue was with an old version of psi4 giving wrong results on large molecules when using the scratch space. This is fixed in the latest version now.


I'll set WCG to don't allow new work and I'll report back!
____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50406 - Posted: 5 Sep 2018 | 16:58:23 UTC

I am running 3.31 SELE6.
Tullio

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50408 - Posted: 5 Sep 2018 | 17:55:26 UTC

Something is wrong. The BOINC Manager says it is running, but python does not appear in the "top" console.

____________

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50409 - Posted: 5 Sep 2018 | 18:23:11 UTC - in response to Message 50408.

I just had about 20 of these fly through before they corrected and started to run correctly.

<core_client_version>7.8.3</core_client_version>
<![CDATA[
<message>
process exited with code 195 (0xc3, -61)</message>
<stderr_txt>
10:09:04 (14352): wrapper (7.7.26016): starting
10:09:04 (14352): wrapper (7.7.26016): starting
10:09:04 (14352): wrapper: running /usr/bin/flock (/home/zalster/Desktop/BOINC/projects/www.gpugrid.net/miniconda.lock -c "/bin/bash ./miniconda-installer.sh -b -u -p /home/zalster/Desktop/BOINC/projects/www.gpugrid.net/miniconda &&
/home/zalster/Desktop/BOINC/projects/www.gpugrid.net/miniconda/bin/conda install -m -y -n qmml2 --override-channels -c defaults -c gpugrid --file requirements.txt ")
Python 3.6.5 :: Anaconda, Inc.

PackagesNotFoundError: The following packages are not available from current channels:

- psi4==1.2.1

Current channels:

- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/free/linux-64
- https://repo.anaconda.com/pkgs/free/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/pro/linux-64
- https://repo.anaconda.com/pkgs/pro/noarch
- https://conda.anaconda.org/gpugrid/linux-64
- https://conda.anaconda.org/gpugrid/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

https://anaconda.org

and use the search bar at the top of the page.


10:09:21 (14352): /usr/bin/flock exited; CPU time 11.828434
10:09:21 (14352): app exit status: 0x1
10:09:21 (14352): called boinc_finish(195)

</stderr_txt>
]]>

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50413 - Posted: 6 Sep 2018 | 7:28:28 UTC - in response to Message 50409.

Yes we had to do some testing with SELE3-5. SELE6 ought to work fine though. 1741/88 success/fail ratio

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 9 Dec 08
Posts: 740
Credit: 4,285,282
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 50416 - Posted: 6 Sep 2018 | 8:45:49 UTC - in response to Message 50413.

Things seem rather stable for SELE6. For further discussion let's please go to the multicore forum.

PappaLitto
Send message
Joined: 21 Mar 16
Posts: 399
Credit: 2,743,869,442
RAC: 1,108,426
Level
Phe
Scientific publications
watwat
Message 50428 - Posted: 7 Sep 2018 | 12:38:28 UTC - in response to Message 49842.
Last modified: 7 Sep 2018 | 13:13:59 UTC

I'm trying out the AMD EPYC trial from Packet; it runs 48 QM WUs at a time... all valid.
To everybody using hyper-threaded CPUs for crunching:
You should test how well the given app scales with HT on or off on your system. The other approach is to leave HT on but lower the percentage of usable CPUs in BOINC Manager (down to 50%). Too many simultaneous memory-intensive apps cause too many cache misses, resulting in degraded combined performance. With HT off (or with usable CPUs set to 50%) calculation time should be halved (because two threads share one FPU). If it's more than half, the number of usable CPUs could be increased, as long as the RAC rises accordingly (i.e. in direct ratio).
I can't test it myself until the Windows app has been released, but I'm interested.
A simultaneous GPU task could also degrade the performance of the CPU tasks, and vice versa.


Zoltan, I think you have a great point here. I am noticing much higher CPU utilization and half the RAM usage since I switched to 50% CPU in BOINC on these new QC WUs. I think it's mostly due to the much lower hard-drive bandwidth required, and perhaps the CPU cache is also allocated more efficiently.

Profile Chilean
Avatar
Send message
Joined: 8 Oct 12
Posts: 86
Credit: 156,476,155
RAC: 474,306
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwat
Message 50429 - Posted: 7 Sep 2018 | 13:31:31 UTC - in response to Message 50428.

I'm trying out the AMD EPYC trial from Packet; it runs 48 QM WUs at a time... all valid.
To everybody using hyper-threaded CPUs for crunching:
You should test how well the given app scales with HT on or off on your system. The other approach is to leave HT on but lower the percentage of usable CPUs in BOINC Manager (down to 50%). Too many simultaneous memory-intensive apps cause too many cache misses, resulting in degraded combined performance. With HT off (or with usable CPUs set to 50%) calculation time should be halved (because two threads share one FPU). If it's more than half, the number of usable CPUs could be increased, as long as the RAC rises accordingly (i.e. in direct ratio).
I can't test it myself until the Windows app has been released, but I'm interested.
A simultaneous GPU task could also degrade the performance of the CPU tasks, and vice versa.


Zoltan, I think you have a great point here. I am noticing much higher CPU utilization and half the RAM usage since I switched to 50% CPU in BOINC on these new QC WUs. I think it's mostly due to the much lower hard-drive bandwidth required, and perhaps the CPU cache is also allocated more efficiently.


Yup, I added Rosetta and WCG to the mix, and the few GPUGRID WUs run constantly @ 400%.
____________

PappaLitto
Send message
Joined: 21 Mar 16
Posts: 399
Credit: 2,743,869,442
RAC: 1,108,426
Level
Phe
Scientific publications
watwat
Message 50435 - Posted: 8 Sep 2018 | 3:16:17 UTC

Do you have any tips for getting higher utilization out of these new large-molecule QC WUs? I am already running 4 WUs on a 16-core system, which is 50% usage in BOINC, but the utilization is all over the place. They're using up to 23 GB of RAM (I have 32 GB) with only 4 WUs, and I have plenty of space on the SSD.

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50488 - Posted: 13 Sep 2018 | 5:41:11 UTC
Last modified: 13 Sep 2018 | 5:42:07 UTC

CPU tasks - unsent: 44,723; in progress: 848; users in last 24hrs: 76


Quantum Chemistry unsent: 13,191 in progress: 866

Looks like we are cutting that number down to size quickly...
____________

PappaLitto
Send message
Joined: 21 Mar 16
Posts: 399
Credit: 2,743,869,442
RAC: 1,108,426
Level
Phe
Scientific publications
watwat
Message 50518 - Posted: 15 Sep 2018 | 13:21:38 UTC

QC WUs are almost out, less than 600 to send out.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 616
Credit: 1,199,706,322
RAC: 89,224
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 50519 - Posted: 15 Sep 2018 | 18:36:11 UTC - in response to Message 50518.

They may be waiting until the 3.31 jobs finish before introducing the new 3.32 version. I expect they have plenty more.

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50521 - Posted: 15 Sep 2018 | 20:03:22 UTC
Last modified: 15 Sep 2018 | 20:03:51 UTC

Hopefully. We are officially out of CPU work.
____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50528 - Posted: 17 Sep 2018 | 14:13:58 UTC

I am running two resends. One of them failed with "file too big" error. The other is running.
Tullio

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50529 - Posted: 17 Sep 2018 | 15:43:45 UTC

I submitted some WUs but I am warning you :P This batch will use lots of scratch space.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 616
Credit: 1,199,706,322
RAC: 89,224
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 50530 - Posted: 17 Sep 2018 | 15:56:54 UTC - in response to Message 50529.

This batch will use lots of scratch space.

I am set up to run four work units at a time. How much will that need? I can change it as necessary; it is "only" a 120 GB SSD, with maybe 80 GB free at the moment.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50531 - Posted: 17 Sep 2018 | 16:21:15 UTC - in response to Message 50530.
Last modified: 17 Sep 2018 | 16:21:48 UTC

I think the largest one took 50 GB of scratch space, but they should scale linearly (they are not all that big), so whether you can run them all in parallel is practically up to chance, depending on whether you get some of the smaller ones or the larger ones.
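Stefan's sizing estimate translates into a quick back-of-the-envelope check before committing to several tasks in parallel. A minimal sketch, assuming the ~50 GB worst case quoted above and a hypothetical 10 GB reserve for the OS (both figures are illustrative, not project guarantees):

```python
import shutil

def fits(free_gb, per_task_gb=50, reserve_gb=10):
    """How many worst-case tasks fit in free_gb gigabytes of disk,
    keeping reserve_gb free for the OS and other projects."""
    return max(0, int((free_gb - reserve_gb) // per_task_gb))

def free_gigabytes(path="."):
    """Free space (in GB) on the filesystem holding `path`."""
    return shutil.disk_usage(path).free / 1e9

# With ~80 GB free (as on the 120 GB SSD mentioned above),
# at most one worst-case WU fits at a time.
```

By that pessimistic estimate, running four at once safely would want roughly 210 GB free; in practice most WUs are smaller, so it is, as Stefan says, up to chance.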

Jim1348
Send message
Joined: 28 Jul 12
Posts: 616
Credit: 1,199,706,322
RAC: 89,224
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 50532 - Posted: 17 Sep 2018 | 17:39:42 UTC - in response to Message 50531.

OK, the 250 GB SSDs are a good buy at the moment in the U.S.

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50538 - Posted: 17 Sep 2018 | 21:47:02 UTC - in response to Message 50529.

I submitted some WUs but I am warning you :P This batch will use lots of scratch space.


Most errors I get from these work units have this

</stderr_txt>
<message>
upload failure: <file_xfer_error>
<file_name>3320_19_21_22_23_e3641c15_n00001-SDOERR_SELE6-0-1-RND3909_0_1</file_name>
<error_code>-131 (file size too big)</error_code>
</file_xfer_error>

</message>
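Error -131 is BOINC's ERR_FILE_TOO_BIG: the output file exceeded the maximum size (max_nbytes) declared in the batch's result template, so only the project side can fix it by raising that limit. A minimal sketch of the size check, illustrative rather than BOINC's actual source:

```python
import os

ERR_FILE_TOO_BIG = -131  # BOINC's error code for an oversized output file

def check_upload_size(path, max_nbytes):
    """Refuse an output file larger than the max_nbytes limit from
    the result template, as the client does before uploading."""
    if os.path.getsize(path) > max_nbytes:
        return ERR_FILE_TOO_BIG
    return 0
```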

____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50544 - Posted: 18 Sep 2018 | 15:08:13 UTC

Two have completed on my Linux box.
Tullio

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50548 - Posted: 18 Sep 2018 | 23:57:57 UTC

One more task gained 1848.00 credits.
Tullio

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50549 - Posted: 19 Sep 2018 | 0:51:19 UTC - in response to Message 50548.

One more task gained 1848.00 credits.
Tullio


9 1/2 hours? That's longer than most Long Run GPU tasks ;)

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50553 - Posted: 19 Sep 2018 | 8:19:29 UTC

Again, 1526.20 credits.
Tullio

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50565 - Posted: 20 Sep 2018 | 0:58:23 UTC - in response to Message 50553.

Again,1526.20 credits.
Tullio

Starting to see some of the longer-run work units:

Run time - CPU time - Credit
3,812.94 - 11,088.78 - 2,307.16
4,054.73 - 11,758.62 - 2,637.67
____________

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50567 - Posted: 20 Sep 2018 | 7:16:07 UTC

Longer runs don't seem to be affected by the DISK_LIMIT_EXCEEDED error, which happens in some shorter runs. My latest long run gave me 1151 credits.
Tullio
____________

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50569 - Posted: 20 Sep 2018 | 10:14:10 UTC

I am now submitting some more of the faster QMML50_3 workunits. These should be quite quick and have a higher priority than the SELE6, so you might be getting these for a while now.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50570 - Posted: 20 Sep 2018 | 13:56:08 UTC

Run time 3,241.13
CPU time 4,048.97
Validate state Valid
Credit 157.51

Erich56
Send message
Joined: 1 Jan 15
Posts: 471
Credit: 2,331,035,852
RAC: 1,929,062
Level
Phe
Scientific publications
watwatwatwat
Message 50571 - Posted: 20 Sep 2018 | 15:38:19 UTC - in response to Message 50570.

Run time 3,241.13
CPU time 4,048.97
Validate state Valid
Credit 157.51

Hm, a marked drop in the credit, compared to what Zalster got (see a few postings above):

Run time - CPU time - Credit
3,812.94 - 11,088.78 - 2,307.16
4,054.73 - 11,758.62 - 2,637.67


tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50572 - Posted: 20 Sep 2018 | 15:53:22 UTC - in response to Message 50571.

Yes, but his CPU time is much higher, probably because of the number of cores he has. I have only two.
Tullio

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50573 - Posted: 20 Sep 2018 | 16:45:19 UTC - in response to Message 50572.

Yes, but his CPU time is much higher, probably because of the core number he has. I have only two.
Tullio


A couple of things I've noticed. The computer that is getting the higher time/credit tasks only has 12 threads. My 10-core/20-thread machine is still getting the shorter, quicker work units. Not sure why.

Also, most of the long runs are resends that errored out on other computers. Maybe disk space was the issue, I don't know. Just thought I would point that out as well.

tullio
Send message
Joined: 8 May 18
Posts: 113
Credit: 9,366,750
RAC: 112,436
Level
Ser
Scientific publications
wat
Message 50574 - Posted: 20 Sep 2018 | 17:29:21 UTC
Last modified: 20 Sep 2018 | 17:41:58 UTC

My Linux HP laptop cannot get GPUGRID tasks because it has only 24.90 GB available and the server says it needs 32 GB. So I am running SETI@home on it, which does not require that much space.

Stefan
Volunteer moderator
Project developer
Project scientist
Send message
Joined: 5 Mar 13
Posts: 317
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 50575 - Posted: 20 Sep 2018 | 19:46:13 UTC

I have no clue how the BOINC scheduler works, but if it works as I hope, you should be getting only the QMML50 workunits for a while now. Maybe some SELE6 were still scheduled from before.

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 64
Credit: 2,075,986,410
RAC: 6,738,113
Level
Phe
Scientific publications
watwatwat
Message 50578 - Posted: 21 Sep 2018 | 0:37:49 UTC - in response to Message 50575.

Yes, the new QMML50s are flowing. They are running between 2 and 4 minutes currently. Will keep an eye on them.
____________

Post to thread

Message boards : News : More CPU jobs