Message boards : News : Large scale experiment: MDAD

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Message 53462 - Posted: 25 Jan 2020 | 10:07:33 UTC
Last modified: 25 Jan 2020 | 10:08:22 UTC

We are starting a new large-scale experiment. There will be plenty of workunits; the very first batch is being sent out now. Run times should be around 6 h, but with a lot of variability. The WUs are very heterogeneous, so please don't worry about failures.

Thanks! -Toni

Nick Name
Message 53465 - Posted: 25 Jan 2020 | 15:08:17 UTC - in response to Message 53462.

These are running just fine on both Windows and Linux so far. I haven't seen any run times near six hours yet. I also see that the Linux app is loading the GPU much higher than the Windows app is, about double.
____________
Team USA forum | Team USA page
Join us and #crunchforcures. We are now also folding: join team ID 236370!

Azmodes
Message 53466 - Posted: 25 Jan 2020 | 15:12:36 UTC

Almost 20 tasks validated so far, but I have also had two WUs end in an error after a few seconds, on two different hosts:

<core_client_version>7.9.3</core_client_version>
<![CDATA[
<message>
process exited with code 195 (0xc3, -61)</message>
<stderr_txt>
12:23:14 (10594): wrapper (7.7.26016): starting
12:23:14 (10594): wrapper (7.7.26016): starting
12:23:14 (10594): wrapper: running acemd3 (--boinc input --device 1)
ERROR: /home/user/conda/conda-bld/acemd3_1570536635323/work/src/mdsim/forcefield.cpp line 174: Cannot index the parameter files with the topology file
12:23:15 (10594): acemd3 exited; CPU time 0.081577
12:23:15 (10594): app exit status: 0x9e
12:23:15 (10594): called boinc_finish(195)

</stderr_txt>
]]>


Also, while my Linux machines get a GPU core load of 90-100%, the Windows ones aren't doing so great (one thread is set aside for each task in the client and swan_sync is on): sub-90%, sometimes around 80%, and the worst I've seen is an RTX 2080 at 70% max.

STARBASEn
Message 53467 - Posted: 25 Jan 2020 | 16:01:24 UTC

Cool, got all 3 Linux NV cards happily crunching away at about 95% gpu usage.

Profile ServicEnginIC
Message 53469 - Posted: 25 Jan 2020 | 16:28:16 UTC - in response to Message 53462.

I'm glad to process Science again! (please note the capital letter)
Thanks so much to the whole GPUGrid team (please see above)

John C MacAlister
Message 53471 - Posted: 25 Jan 2020 | 17:21:57 UTC

What is the object of the research?
____________
John

Aurum
Message 53472 - Posted: 25 Jan 2020 | 18:29:14 UTC

Toni, glad to get the work.
Is there any plan to upgrade your server or internet speed???

Remanco
Message 53478 - Posted: 25 Jan 2020 | 21:12:51 UTC - in response to Message 53471.

What is the object of the research?


Yes, can we have a bit more info on what we crunch?

Thanks!

Sylvain


Miklos M.
Message 53480 - Posted: 25 Jan 2020 | 22:47:50 UTC

Trying to get some tasks and so far no luck. Am I doing it wrong?

Thanks

Keith Myers
Message 53481 - Posted: 25 Jan 2020 | 22:49:24 UTC - in response to Message 53480.

Trying to get some tasks and so far no luck. Am I doing it wrong?

Thanks

Do you have the acemd3 application selected in your project preferences?

Nick Name
Message 53482 - Posted: 25 Jan 2020 | 23:13:09 UTC

Out of work already! LOL
____________
Team USA forum | Team USA page
Join us and #crunchforcures. We are now also folding: join team ID 236370!

davidBAM
Message 53486 - Posted: 25 Jan 2020 | 23:50:02 UTC

Is the policy still to reduce credits on work not uploaded within 24hrs of issue?

Killersocke
Message 53488 - Posted: 26 Jan 2020 | 0:13:25 UTC - in response to Message 53482.

Out of work already! LOL

+ 1

Profile Retvari Zoltan
Message 53489 - Posted: 26 Jan 2020 | 0:47:15 UTC - in response to Message 53482.

Out of work already! LOL
I think it was just the warm-up. Every batch Toni queued yesterday consisted of only a single step, so it's no wonder they didn't last longer.

Profile Retvari Zoltan
Message 53490 - Posted: 26 Jan 2020 | 1:31:46 UTC - in response to Message 53486.

Is the policy still to reduce credits on work not uploaded within 24hrs of issue?
Yes. But it's actually a +50% bonus for less than 24h, or +25% for less than 48h.
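In other words, a quick-return bonus rather than a penalty. A small sketch of the rule as described in this post (the exact server-side thresholds and rounding are assumptions):

```python
def credit_with_bonus(base_credit: float, hours_to_return: float) -> float:
    """Quick-return bonus as described above: +50% if the result is
    returned within 24h, +25% within 48h, no bonus otherwise.
    (Sketch; the real server-side logic may differ in detail.)"""
    if hours_to_return < 24:
        return base_credit * 1.50
    elif hours_to_return < 48:
        return base_credit * 1.25
    return base_credit

print(credit_with_bonus(1000, 6))    # 1500.0
print(credit_with_bonus(1000, 30))   # 1250.0
print(credit_with_bonus(1000, 72))   # 1000
```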

davidBAM
Message 53491 - Posted: 26 Jan 2020 | 1:44:50 UTC - in response to Message 53490.

Thank you. Great job on the new WU incidentally.

As they are much shorter, could I ask that you please allow download of more than 2 WUs per GPU?

Keith Myers
Message 53492 - Posted: 26 Jan 2020 | 3:05:17 UTC

I believe the limit is 16 per host. That is what I got on my 3 hosts. After that I received the "you have reached the limit of tasks in progress" message.

davidBAM
Message 53493 - Posted: 26 Jan 2020 | 4:01:18 UTC - in response to Message 53492.

Thank you. Perhaps I'll see that once WUs become freely available :-)

Erich56
Message 53495 - Posted: 26 Jan 2020 | 6:19:53 UTC - in response to Message 53492.

As they are much shorter, could I ask that you please allow download of more than 2 WUs per GPU?

I believe the limit is 16 per host. That is what I got on my 3 hosts. After that I received the "you have reached the limit of tasks in progress" message.

I guess what was meant in the first post cited above was to increase the limit of tasks per GPU that can be downloaded at a time.
So far, this figure was (and still seems to be) 2.

As for the 16 tasks per host (in the second of the above postings), I guess this was the total number of tasks downloaded NOT at one time, but within a certain time frame yesterday, provided a given GPU was fast enough.
My various hosts got only up to about 10 tasks each, and that was it. No more downloads since late last night.



Trotador
Message 53496 - Posted: 26 Jan 2020 | 8:30:54 UTC

An important issue I've noted after crunching these GPUGrid units on my Ubuntu 16.04 hosts (not on the 18.04 ones) is that the other BOINC GPU projects (and Folding@home) then fail with errors when trying to crunch. I tested with Amicable, Einstein and FAH.

I've had to reinstall the NVIDIA drivers and restart to get things working again. A matter of libraries and links, I guess.

adrianxw
Message 53498 - Posted: 26 Jan 2020 | 9:53:56 UTC
Last modified: 26 Jan 2020 | 10:20:32 UTC

I added GPUGrid to the projects list on one of my machines two years ago and I've never received a work unit. Other GPU projects are not having any trouble. Removed now.
____________

biodoc
Message 53499 - Posted: 26 Jan 2020 | 10:26:48 UTC - in response to Message 53498.

I added GPUGrid to the projects list on one of my machines two years ago and I've never received a work unit. Other GPU projects are not having any trouble. Removed now.


I checked your computer and it appears to have an AMD GPU, which is not supported; only Nvidia cards are. Here's the FAQ for the new app:

http://www.gpugrid.net/forum_thread.php?id=5002#52865

Werkstatt
Message 53500 - Posted: 26 Jan 2020 | 11:02:10 UTC

Some years ago there was an AMD application, and it is still possible to check the box for AMD WUs in the GPUGRID preferences.
Maybe there would be less confusion if this check-box were removed.

biodoc
Message 53501 - Posted: 26 Jan 2020 | 11:28:43 UTC - in response to Message 53492.

I believe the limit is 16 per host. That is what I got on my 3 hosts. After that I received the "you have reached the limit of tasks in progress" message.


The limit is 2 per GPU. I see your computers are set up to run Seti, where it is common to "spoof" the server into "thinking" you have 32 coprocessors/GPUs per rig.

adrianxw
Message 53502 - Posted: 26 Jan 2020 | 13:18:11 UTC - in response to Message 53499.
Last modified: 26 Jan 2020 | 13:20:36 UTC

It is a little ironic that a project specifically for GPUs supports fewer GPUs than other projects do. Einstein, Milky Way, Seti, etc. have no problem.
____________

Profile BladeD
Message 53503 - Posted: 26 Jan 2020 | 15:16:00 UTC

Any idea when new workunits will be released?
____________

Werkstatt
Message 53504 - Posted: 26 Jan 2020 | 15:50:44 UTC

I see your computers are set up to run Seti, where it is common to "spoof" the server into "thinking" you have 32 coprocessors/GPUs per rig.

Tell me more!
Seti has the problem of not always being available and of not always having WUs available, but the allowed turnaround time is quite long. So it makes sense to have a larger buffer, but this should only affect the Seti WUs.

Keith Myers
Message 53505 - Posted: 26 Jan 2020 | 17:07:46 UTC - in response to Message 53501.
Last modified: 26 Jan 2020 | 17:12:20 UTC

I believe the limit is 16 per host. That is what I got on my 3 hosts. After that I received the "you have reached the limit of tasks in progress" message.


The limit is 2 per GPU. I see your computers are set up to run Seti, where it is common to "spoof" the server into "thinking" you have 32 coprocessors/GPUs per rig.

I didn't think that was the issue. I never received more than two tasks per GPU on the previous run of work units.

It depends on the project whether it recognizes the spoofed GPUs. Seti does, which is why I use it to keep the GPUs fed during the ever longer Seti outages.

It may be that this run of work did recognize the spoofed GPUs, but the math doesn't add up for the 4 hosts. Each host got 16 WUs. I have three 3-card hosts and one 4-card host. One 3-card host got nothing, because it is primarily an Einstein machine and all I got for a GPUGrid request was "gpu cache is full".

Except for the Einstein host, all the other hosts are spoofed with either 21 or 32 GPUs. By your math I should have received either only 8 tasks on the 4-card host, or 64 tasks. I got neither. The limit appears to have been fixed at 16 for each host. As I returned work, my cache kept getting refilled to a count of 16 on each host. I figured that was more likely due to my global cache setting.

Keith Myers
Message 53506 - Posted: 26 Jan 2020 | 17:10:20 UTC - in response to Message 53504.

I see your computers are set up to run Seti, where it is common to "spoof" the server into "thinking" you have 32 coprocessors/GPUs per rig.

Tell me more!
Seti has the problem of not always being available and of not always having WUs available, but the allowed turnaround time is quite long. So it makes sense to have a larger buffer, but this should only affect the Seti WUs.

The coproc_info.xml file that is created by the client controls the number of gpus detected.

Manipulate that file and you can tell BOINC that you have as many as 64 GPUs. But you can't exceed 64, as that is a hard limit in the server-side code.

Werkstatt
Message 53509 - Posted: 26 Jan 2020 | 20:52:12 UTC

The coproc_info.xml file that is created by the client controls the number of gpus detected.

Got it. THX !

Toni
Message 53513 - Posted: 27 Jan 2020 | 8:37:25 UTC - in response to Message 53509.

This was the first piece of a larger batch of 14k WUs. It's (amazingly!) already complete. I'll need to process it to create new WUs. The purpose of the work is (broadly speaking) methods development, i.e. build a dataset to improve the foundation of future MD-based research (not just GPUGRID). More details may come if it works ;)

Thanks to everybody for contributing. Also, special thanks to those taking care of answering questions about BOINC details.


Aurum
Message 53515 - Posted: 27 Jan 2020 | 14:17:28 UTC

For a serial process like this, the optimum would be to send only one WU per GPU.

Erich56
Message 53516 - Posted: 27 Jan 2020 | 15:29:02 UTC - in response to Message 53515.

For a serial process like this, the optimum would be to send only one WU per GPU.

Not really; what would happen then is that there would always be some idle time between uploading/reporting the result of a task and downloading the next one.
This means the GPU cools off for a (short) while and heats up again once the new task starts being crunched.
If this happens several times per day over a lengthy period of time, this so-called "thermal cycling" definitely shortens the lifetime of the GPU.
Hence, it's definitely better to have another task already waiting to start immediately after the previous one finishes.

klepel
Message 53517 - Posted: 27 Jan 2020 | 18:13:05 UTC - in response to Message 53516.

Not really; what would happen then is that there would always be some idle time between uploading/reporting the result of a task and downloading the next one.
This means the GPU cools off for a (short) while and heats up again once the new task starts being crunched.
If this happens several times per day over a lengthy period of time, this so-called "thermal cycling" definitely shortens the lifetime of the GPU.
Hence, it's definitely better to have another task already waiting to start immediately after the previous one finishes.

+1

Aurum
Message 53519 - Posted: 28 Jan 2020 | 13:52:38 UTC - in response to Message 53516.

...the GPU cools off for a (short) while and heats up once the new task starts being cruched (sic).
If this happens several time per day, over a lengthy period of time, this so-called "thermal cycle" definitely shortens the lifetime of the GPU.
The degradation process for electronics is called electromigration. Flowing current while hot actually moves atoms. Where the conductors neck down, e.g. turning a sharp corner or going over bumps, the current density increases and hence the electromigration increases. This is an irreversible process that accelerates as the conductor chokes down and ultimately results in a broken line and failure.
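The usual back-of-the-envelope model for this is Black's equation, MTTF = A · J^-n · exp(Ea/(k·T)): lifetime falls with current density and with temperature. A small sketch; the constants n ≈ 2 and Ea ≈ 0.7 eV are illustrative textbook values, not measured GPU figures:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def relative_mttf(j_ratio, temp_k, n=2.0, ea_ev=0.7, ref_temp_k=350.0):
    """Median time to failure from Black's equation,
    MTTF = A * J**-n * exp(Ea/(k*T)), normalized to a reference point
    (j_ratio = J/J_ref = 1.0 at T = ref_temp_k) so the unknown
    process constant A cancels out."""
    return (j_ratio ** -n) * math.exp(
        (ea_ev / K_BOLTZMANN_EV) * (1.0 / temp_k - 1.0 / ref_temp_k))

# Same current density, 10 K hotter: lifetime roughly halves.
print(relative_mttf(1.0, 360.0))
# 10% lower current density at the same temperature: lifetime grows ~23%.
print(relative_mttf(0.9, 350.0))
```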

Since GPUGrid is supply-limited, one per GPU would ensure that more hosts get a WU before any host starts getting additional WUs. Now that the WUs run in less than half the time, two per GPU works well, but folks still get left out.

The GPUGrid server is notoriously slow. If it were fast and there were over 10,000 WUs continuously available, then one per GPU would be optimal.

Toni
Message 53520 - Posted: 28 Jan 2020 | 14:53:20 UTC - in response to Message 53506.



Manipulate that file and you can tell BOINC that you have as many as 64 gpus. But you can't exceed 64 as that is a hard limit in the server side code.


Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).

Profile Retvari Zoltan
Message 53521 - Posted: 28 Jan 2020 | 15:51:21 UTC - in response to Message 53520.

Manipulate that file and you can tell BOINC that you have as many as 64 gpus. But you can't exceed 64 as that is a hard limit in the server side code.
Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).
Fortunately, simple manipulation doesn't work, as this file is overwritten by the BOINC manager at startup.

pututu
Message 53522 - Posted: 28 Jan 2020 | 16:00:12 UTC - in response to Message 53521.

Manipulate that file and you can tell BOINC that you have as many as 64 gpus. But you can't exceed 64 as that is a hard limit in the server side code.
Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).
Fortunately, simple manipulation doesn't work, as this file is overwritten by the BOINC manager at startup.

You can prevent the coproc file from being overwritten by BOINC.

Toni
Message 53525 - Posted: 28 Jan 2020 | 16:46:41 UTC - in response to Message 53522.
Last modified: 28 Jan 2020 | 16:47:17 UTC

Manipulate that file and you can tell BOINC that you have as many as 64 gpus. But you can't exceed 64 as that is a hard limit in the server side code.
Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).
Fortunately, simple manipulation doesn't work, as this file is overwritten by the BOINC manager at startup.

You can prevent the coproc file from being overwritten by BOINC.


Which may explain tasks failing with

# Engine failed: Illegal value for DeviceIndex: 2

i.e. they attempt to run on non-existent gpus.

Profile Retvari Zoltan
Message 53526 - Posted: 28 Jan 2020 | 17:34:07 UTC - in response to Message 53519.

...the GPU cools off for a (short) while and heats up once the new task starts being cruched (sic).
If this happens several time per day, over a lengthy period of time, this so-called "thermal cycle" definitely shortens the lifetime of the GPU.
The degradation process for electronics is called electromigration. Flowing current while hot actually moves atoms. Where the conductors neck down, e.g. turning a sharp corner or going over bumps, the current density increases and hence the electromigration increases. This is an irreversible process that accelerates as the conductor chokes down and ultimately results in a broken line and failure.
Both of these are present in a GPU (or in any modern electronics made of chips).
The thermal cycle (a single period of thermal expansion and contraction) hurts the contact points (the ball-grid soldering) between the chip's package and the card's PCB. It's most prominent for the GPU chip and the RAM chips on a GPU card. Its effect can be lessened by better cooling and lower power dissipation (= lower clock speeds and lower voltages), but most importantly by stable working temperatures (of the chip itself). No idling -> the chip stays hot -> no thermal contraction -> no thermal cycle.
Electromigration can be lessened by lower currents, which are the result of lower voltages and lower frequencies. It could be prevented by not using the chip at all, but we're here to use our chips all the time, as fast as possible, so we can't or won't do anything to lessen electromigration.
Intel had a problem with that a couple of years ago (IIRC, the SATA controller in their 6-series south bridge chip could fail before its planned lifetime).
Electromigration is one of the practical reasons for the lower limit on the size of a transistor inside the chip. The present size of these basic elements is very close to that practical minimum, so it's getting harder to shrink them (= to make the fabrication process profitable). The other limit on the minimum size is theoretical, as (according to quantum mechanics) too small a bunch of silicon (+ doping) atoms simply won't work as a transistor.

Since GPUGrid is supply-limited, one per GPU would ensure that more hosts get a WU before any host starts getting additional WUs. Now that the WUs run in less than half the time, two per GPU works well, but folks still get left out.
The number of workunits per GPU depends on the ratio of the supply to the number of active hosts. One per GPU would be favorable at the present ratio, but when there's a lot of work queued, 2 per GPU seems too low.
2 per GPU is a compromise, as the download/upload time can be significant (for example, uploading the 138 MB result file).

The GPUGrid server is notoriously slow. If it were fast and there were over 10,000 WUs continuously available, then one per GPU would be optimal.
It's not just the speed. There's some DDoS-prevention algorithm in operation, because my hosts get blocked if they try to contact the server one by one in rapid succession (from the same public IP address).

Profile Retvari Zoltan
Message 53527 - Posted: 28 Jan 2020 | 17:57:57 UTC - in response to Message 53525.

Manipulate that file and you can tell BOINC that you have as many as 64 gpus. But you can't exceed 64 as that is a hard limit in the server side code.
Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).
Fortunately, simple manipulation doesn't work, as this file is overwritten by the BOINC manager at startup.
You can prevent the coproc file from been overwritten by BOINC.
Which may explain tasks failing with

# Engine failed: Illegal value for DeviceIndex: 2

i.e. they attempt to run on non-existent gpus.
It's a sign of that. So, luckily, it's not enough to simply prevent the BOINC manager from overwriting this file.
Using this method is very counterproductive (for example, to avoid running dry during a shortage / outage or a combined event). The users of this method don't care about their fellow (unaware) crunchers, as this method is aimed directly at them (not just at the "precious" tasks on the server).

Jim1348
Message 53529 - Posted: 28 Jan 2020 | 18:26:28 UTC

1. Ban people who rig the system.

2. Electromigration is a very real problem, and they study it extensively. Before any new chip is ready for production, they have that ironed out. If the chip fails in the next 100 years, it probably won't be for that reason unless you abuse it by overclocking and overheating it excessively.

Zalster
Message 53530 - Posted: 28 Jan 2020 | 19:03:50 UTC - in response to Message 53529.

1. Banning is a bit extreme. I think just asking people not to do it should be enough.

2. The spoofed client wasn't meant for GPUGrid. It was developed for Seti. It has no effect on Einstein@home, and it is surprising that it adversely affects the GPUGrid project; it wasn't meant to. As it was built for a project with a large pool of uncrunched data, it made sense to be able to download a larger cache of tasks that ran in very short timeframes (42 secs on average). It was not meant for GPUGrid, where there is limited data that takes several hours to process. This is an unfortunate side effect of the spoofed client. It was not aimed at denying fellow crunchers access to data; if some people feel that way, it is unfortunate, but that was not the intended purpose.

3. The comment that chips should last 100 years is an overstatement. Nothing lasts 100 years anymore. We are in a consumer-driven economy, meaning demand helps fuel the economy. If things lasted a hundred years, companies would go out of business as demand dropped off. More likely, they are designed to last 3-5 years before they fail.

4. I agree it would be preferable to keep the chips warm and busy by having 1 extra task available, so that there is little lag when switching and voltages and temps don't fluctuate significantly over an extended period of time.

Erich56
Message 53531 - Posted: 28 Jan 2020 | 19:29:54 UTC - in response to Message 53530.

I agree it would be preferable to keep the chips warm and busy by having 1 extra task available, so that there is little lag when switching and voltages and temps don't fluctuate significantly over an extended period of time.

that's exactly what I said - so the 1 extra task should continue being provided, in any case.

(BTW, while there were no GPU tasks available during the past few days, I switched to Einstein - and these tasks showed a strange behaviour [as opposed to about a year ago]: for the first 80-100 seconds and the last 50-60 seconds of a task, only the CPU was crunching, NOT the GPU. Given that the tasks' length was about 12-14 minutes, the GPU was going through a thermal cycle about 5 times per hour).
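As a quick sanity check of the "5 times per hour" figure (a sketch assuming the mid-point of the quoted 12-14 minute task length):

```python
# One thermal cycle per Einstein task; tasks quoted at 12-14 minutes.
task_minutes = 13.0                      # assumed mid-point
cycles_per_hour = 60.0 / task_minutes
print(round(cycles_per_hour, 1))         # 4.6, i.e. roughly 5 cycles per hour
```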

Richard Haselgrove
Message 53532 - Posted: 28 Jan 2020 | 19:32:11 UTC - in response to Message 53530.

2. The spoofed client wasn't meant for GPUGrid. It was developed for Seti. It has no effect on Einstein@home and is surprising that it adversely affects the GPUGrid project.

I beg to disagree. I run the spoofed client, and I have it set to 'declare' 16 GPUs. At Einstein, it always fetches 16 tasks, even with a cache setting of 0.01 days + 0.01 days: BOINC automatically requests work to fill all apparently 'idle' devices.

The spoofing system works alongside the use of <max_concurrent> for the project, to ensure that tasks are never allocated to a GPU beyond the actual count of physical GPUs present - two in my case. Managed correctly, it should never permit BOINC to assign a task to an imaginary GPU - though I'm not sure how it would react if the configuration implied a limit of two Einstein tasks and two GPUGrid tasks. Best to think that one through very carefully.
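For reference, the <max_concurrent> mechanism mentioned above lives in a per-project app_config.xml in the BOINC data directory. A minimal sketch (the app name acemd3 and the two-GPU host are assumptions; check client_state.xml for the exact app name on your machine):

```xml
<!-- app_config.xml, placed in e.g. .../projects/www.gpugrid.net/ -->
<app_config>
  <app>
    <name>acemd3</name>
    <!-- never run more than 2 tasks at once, i.e. one per physical GPU here -->
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

The client re-reads this file on "Options / Read config files", so the limit can be adjusted without restarting BOINC.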

I can see that allowing my machine to request 16 tasks from GPUGrid would be detrimental to this project's desire to have the fastest possible turnround.

Jim1348
Message 53534 - Posted: 28 Jan 2020 | 20:20:59 UTC - in response to Message 53531.
Last modified: 28 Jan 2020 | 20:23:58 UTC

(BTW, while there were no GPU tasks available during the past few days, I switched to Einstein - and these tasks showed a strange behaviour [as opposed to about a year ago]: for the first 80-100 seconds and the last 50-60 seconds of a task, only the CPU was crunching, NOT the GPU. Given that the tasks' length was about 12-14 minutes, the GPU was going through a thermal cycle about 5 times per hour).

The gravity wave work on Einstein involves a CPU preparation phase before the GPU gets involved. I have seen that on other projects as well. But if you are concerned about thermal cycles, what about the gamers? They would have destroyed their cards long before you.

It is not a problem. But if too many tricks are used to fix it, they will generate other problems.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53537 - Posted: 29 Jan 2020 | 5:56:23 UTC - in response to Message 53534.

... But if you are concerned about thermal cycles, what about the gamers? They would have destroyed their cards long before you.

that's what I have been thinking, too.
On the other hand, games are not running 24/7.

Back to the current tasks: they were all used up during last night, so again none are available for download :-(

Werkstatt
Send message
Joined: 23 May 09
Posts: 121
Credit: 321,525,386
RAC: 477,412
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53538 - Posted: 29 Jan 2020 | 8:15:31 UTC

(BTW, while there were no GPU tasks available during the past few days, I switched to Einstein - and these tasks showed a strange behaviour [as opposed to about a year ago]: for the first 80-100 seconds and the last 50-60 seconds of a task, only the CPU was crunching, NOT the GPU. Given that the tasks' length was about 12-14 minutes, the GPU was going through a thermal cycle about 5x per hour).

Einstein lets you configure running multiple WUs per GPU, which results in an average GPU usage of >98%.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53539 - Posted: 29 Jan 2020 | 9:41:30 UTC - in response to Message 53520.

Toni wrote

Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).

a short look at the users list easily reveals some of the "faked" GPUs - their hosts show 48 GPUs per host(!). So no wonder that they download dozens of tasks at a time and are still processing them long after other users are through with the only 2 tasks their hosts could download.

This procedure is highly unfair, and GPUGRID should quickly take steps against it.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53540 - Posted: 29 Jan 2020 | 9:45:39 UTC

It's the motherboard, it's always the motherboard. The MB is the most unreliable part of a computer. I have a stack of dead ones. I wish there were an MB designed specifically for distributed computing, with no baby blinky lights and other excessive features.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53541 - Posted: 29 Jan 2020 | 10:16:10 UTC - in response to Message 53526.

It's not just the speed. There's some DDoS prevention algorithm in operation, because my hosts get blocked if they try to contact the server one by one in rapid succession (from the same public IP address).
What can we do to mitigate this effect???

OAS: Many projects are adding a Max # WUs option in Preferences. Maybe add it with the choice of 1 or 2.

OAS: Bunkering for serial projects should be banned one way or another. These "races" and "sprints" have some folks requesting as many WUs per host as they can get, but the results don't get submitted to the work server until after the race start time, i.e. bunkering.

I triggered something a few days ago on GPUGrid that I've never seen before on a BOINC project. It was a fluke combination of things that had me upgrade my drivers but delay a reboot. It wouldn't have bothered anything else, but unbeknownst to me a slug of GPUGrid WUs had appeared. All those WUs had computation errors. Then both computers got banned with a Project Request. I thought it would be a 24-hour timeout I'd seen folks mention before, but it persisted for days. After a few days I tried a manual Project Update and it started working again. Can this Project Requested Ban be applied to bunkerers???

Miklos M.
Send message
Joined: 16 Jun 12
Posts: 17
Credit: 170,413,806
RAC: 0
Level
Ile
Scientific publications
watwatwatwat
Message 53543 - Posted: 29 Jan 2020 | 12:49:06 UTC - in response to Message 53481.

Yes, Keith and by now I got 150 tasks, yesterday that is, but none this morning, so far.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1280
Credit: 4,854,031,959
RAC: 4,370,637
Level
Arg
Scientific publications
watwatwatwatwat
Message 53544 - Posted: 29 Jan 2020 | 16:21:09 UTC - in response to Message 53543.
Last modified: 29 Jan 2020 | 16:24:05 UTC

Yes, Keith and by now I got 150 tasks, yesterday that is, but none this morning, so far.

Good for you Miklos. And I see you have made the project happy by returning all within 24 hours.

Looks like Toni's comment about plenty of work forthcoming is true.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53545 - Posted: 29 Jan 2020 | 16:31:17 UTC - in response to Message 53539.
Last modified: 29 Jan 2020 | 16:32:34 UTC

Toni wrote
Please don't "fake" gpus as it will create WU "hoarding": it will deprive other users of work, and slow down our analysis (we sometimes have to wait for batches to be complete).

a short look at the users list easily reveals some of the "faked" GPUs - their hosts show 48 GPUs per host(!). So no wonder that they download dozens of tasks at a time and are still processing them long after other users are through with the only 2 tasks their hosts could download.

This procedure is highly unfair, and GPUGRID should quickly take steps against it.
That's easy: limit the number of simultaneous tasks per host to 16.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53546 - Posted: 29 Jan 2020 | 16:54:57 UTC - in response to Message 53541.
Last modified: 29 Jan 2020 | 16:56:03 UTC

It's not just the speed. There's some DDoS prevention algorithm in operation, because my hosts get blocked if they try to contact the server one by one in rapid succession (from the same public IP address).
What can we do to mitigate this effect???
There's no easy way to fix this on our end.

OAS: Bunkering for serial projects should be banned one way or another. These "races" and "sprints" have some folks requesting as many WUs per host as they can get, but the results don't get submitted to the work server until after the race start time, i.e. bunkering.
Agreed.

I triggered something a few days ago on GPUGrid that I've never seen before on a BOINC project. It was a fluke combination of things that had me upgrade my drivers but delay a reboot. It wouldn't have bothered anything else, but unbeknownst to me a slug of GPUGrid WUs had appeared. All those WUs had computation errors.
That's most probably because of the delayed reboot.

Then both computers got banned with a Project Request.
This "banning" is done simply by reducing the host's max tasks per day to 1. While the number of tasks done that day is above 1, the project won't send more work to that host when it asks. The next day the count starts again from 0, so the project will send work the next time your host asks.

I thought it would be a 24-hour timeout I'd seen folks mention before but it persisted for days.
That's because your BOINC manager entered an extended back-off of the GPUGrid project (because the project didn't send work to your host for several task requests). Perhaps other projects kept your host busy.

After a few days I tried a manual Project Update and it started working again.
That made the BOINC manager ask GPUGrid for work, and because this request was successful, it ended the extended back-off.

Can this Project Requested Ban be applied to bunkerers???
No. (You can probably see this from the order of events by now.)
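Zoltan's description of the daily-quota mechanism can be sketched roughly like this. This is an illustration of the general BOINC-style quota idea, not GPUGrid's actual server code; the class name, starting quota, and halving/doubling factors are all assumptions for the sake of the example:

```python
# Minimal sketch of the per-host daily quota behaviour described above.
# NOT GPUGrid's server code; numbers and names are illustrative only.

class HostQuota:
    def __init__(self, max_tasks_per_day=32):
        self.max_tasks_per_day = max_tasks_per_day
        self.quota = max_tasks_per_day   # tasks the host may still get today
        self.done_today = 0              # tasks already sent today

    def new_day(self):
        # The per-day counter resets at the day boundary, so a "banned"
        # host starts receiving work again the next day.
        self.done_today = 0

    def request_work(self):
        if self.done_today >= self.quota:
            return None                  # server sends nothing this time
        self.done_today += 1
        return "task"

    def report_error(self):
        # Repeated errors shrink the quota toward 1 (the apparent "ban").
        self.quota = max(1, self.quota // 2)

    def report_success(self):
        # Valid results grow it back, up to the configured maximum.
        self.quota = min(self.max_tasks_per_day, self.quota * 2)
```

A host that errors out a batch of tasks ends up with a quota of 1 and gets refused for the rest of that day, then receives work again after the daily reset, which matches the behaviour Aurum saw.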

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53547 - Posted: 29 Jan 2020 | 17:11:29 UTC - in response to Message 53546.
Last modified: 29 Jan 2020 | 17:14:15 UTC

It's not just the speed. There's some DDoS prevention algorithm in operation, because my hosts get blocked if they try to contact the server one by one in rapid succession (from the same public IP address).
What can we do to mitigate this effect???
There's no easy way to fix this on our end.
I was thinking about the range of "Store at least X days of work" and Resource Share values to avoid setting off the DDoS alarm.
I triggered something a few days ago on GPUGrid that I've never seen before on a BOINC project. It was a fluke combination of things that had me upgrade my drivers but delay a reboot. It wouldn't have bothered anything else, but unbeknownst to me a slug of GPUGrid WUs had appeared. All those WUs had computation errors.
That's most probably because of the delayed reboot.
The reboot delay was only 30 minutes or so. I was working on a non-BOINC project and was not aware GG WUs had arrived, so when they started to error out, they went as fast as the GG server would send them. How was not the point; it was a fluke resulting from the feast-or-famine nature of GG.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1280
Credit: 4,854,031,959
RAC: 4,370,637
Level
Arg
Scientific publications
watwatwatwatwat
Message 53548 - Posted: 29 Jan 2020 | 17:43:12 UTC

That's easy: limit the number of simultaneous tasks per host to 16.

Which goes back to my original post in this thread. I think that is what they have done since the beginning of the new work generation.

I keep bumping up against that 16-task-per-host number. I turn tasks in and I get more, up to the 16 count.

And the next scheduler connection 31 seconds later after refilling gets me:

Pipsqueek

70548 GPUGRID 1/29/2020 9:28:16 AM This computer has reached a limit on tasks in progress

As long as a host turns in valid work in a timely manner, I don't think any kind of new restriction is needed. The faster hosts get more work done for the project, which should keep the scientists happy with the progress of their research.

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53549 - Posted: 29 Jan 2020 | 17:53:31 UTC - in response to Message 53548.

To come back on topic, there is a batch ("MDADpr1") of ~50k workunits being created. I hope it's correct.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1576
Credit: 5,517,861,851
RAC: 8,636,022
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53550 - Posted: 29 Jan 2020 | 18:22:06 UTC - in response to Message 53549.

To come back on topic, there is a batch ("MDADpr1") of ~50k workunits being created. I hope it's correct.

I got 1a0aA00_320_1-TONI_MDADpr1-0-5-RND6201 over an hour ago, but no sign of any of the others.

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53551 - Posted: 29 Jan 2020 | 18:31:04 UTC - in response to Message 53550.

Actually they were only 500. Better this way - they came out too large. Feel free to abort them.

Zalster
Avatar
Send message
Joined: 26 Feb 14
Posts: 211
Credit: 4,496,324,562
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwat
Message 53552 - Posted: 29 Jan 2020 | 18:36:17 UTC - in response to Message 53551.

Actually they were only 500. Better this way - they came out too large. Feel free to abort them.


Had 6 of them, about 4500s-4900s into them when the server cancelled them.....

Now you have me curious as to how long they would have run....
____________

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53553 - Posted: 29 Jan 2020 | 18:37:48 UTC
Last modified: 29 Jan 2020 | 18:43:32 UTC

about an hour ago, I had two tasks (on two different hosts) that were "aborted by project" after about 5,900 seconds:

http://www.gpugrid.net/result.php?resultid=21644737
http://www.gpugrid.net/result.php?resultid=21644681

what happened?

edit: just now, two other ones like those mentioned in Toni's message

To come back on topic, there is a batch ("MDADpr1") of ~50k workunits being created. I hope it's correct.
were aborted by server, right after start.
What's wrong with them?

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53554 - Posted: 29 Jan 2020 | 18:38:17 UTC - in response to Message 53552.
Last modified: 29 Jan 2020 | 18:38:57 UTC

Ok, I did not know the server would cancel running WUs. Good to know. They would have run around 6h-ish, but I was not sure they wouldn't fail at the end due to large uploads.

The next test batch (MDADpr2) is out.

Profile BladeD
Send message
Joined: 1 May 11
Posts: 9
Credit: 144,358,529
RAC: 0
Level
Cys
Scientific publications
watwatwat
Message 53555 - Posted: 29 Jan 2020 | 18:58:51 UTC - in response to Message 53554.

Ok, I did not know the server would cancel running WUs. Good to know. They would have run around 6h-ish, but I was not sure they wouldn't fail at the end due to large uploads.

The next test batch (MDADpr2) is out.

Okay, glad to see that I have the good ones!
____________

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53558 - Posted: 30 Jan 2020 | 1:18:30 UTC - in response to Message 53548.
Last modified: 30 Jan 2020 | 1:26:02 UTC

As long as a host turns in valid work in a timely manner, I don't think any kind of new restriction is needed. The faster hosts get more work done for the project, which should keep the scientists happy with the progress of their research.
GPUGrid differs from SETI@home in the way our computers actually advance the research: at GPUGrid our hosts generate the data to be analysed by the scientists, while SETI@home uses pre-recorded data split into many small chunks to be processed by the hosts. At SETI@home the individual pieces can be processed independently, but at GPUGrid fresh workunits are generated from the result of the previous run. If your host grabs 64 workunits but actually processes only 1, then your host hinders the progress of the other 63 chains of workunits. The more you grab, the more delay you put into the progress of the ongoing MD simulation batches.
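Zoltan's point can be made concrete with a back-of-the-envelope sketch. The 6-hour runtime is Toni's estimate from the opening post; everything else here is illustrative, not project code:

```python
# Toy model of the "chain of workunits" effect: workunit N+1 in a chain can
# only be generated after workunit N has been returned, so a hoarded WU
# stalls its whole chain until the hoarding host gets around to it.

def hours_until_chain_resumes(position_in_cache, runtime_h=6.0):
    """Hours before a hoarded WU is returned and its chain can continue,
    if the host crunches its cache one task at a time, in order."""
    return position_in_cache * runtime_h

# A host that grabs 64 WUs and runs them one at a time returns the last
# one 384 hours (16 days) later, so the unluckiest chain sits idle that
# long; with a 2-task cache the worst-case stall is only 12 hours.
worst_stall_hoarder = hours_until_chain_resumes(64)   # 384.0 hours
worst_stall_small = hours_until_chain_resumes(2)      # 12.0 hours
```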

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1280
Credit: 4,854,031,959
RAC: 4,370,637
Level
Arg
Scientific publications
watwatwatwatwat
Message 53559 - Posted: 30 Jan 2020 | 3:57:06 UTC - in response to Message 53554.
Last modified: 30 Jan 2020 | 4:38:11 UTC

Ok, I did not know the server would cancel running WUs. Good to know. They would have run around 6h-ish, but I was not sure they wouldn't fail at the end due to large uploads.

The next test batch (MDADpr2) is out.

The MDADpr2 batch ain't small in its own right. 188 MB to upload, and only at 60% so far after an hour.

[Edit] Also see Toni made good on the credit re-adjustment. Now only getting a quarter of what was awarded prior for 4 times the length of processing time.
https://www.gpugrid.net/workunit.php?wuid=16977060
More in line with the previous batch of work.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53560 - Posted: 30 Jan 2020 | 5:50:57 UTC - in response to Message 53559.
Last modified: 30 Jan 2020 | 6:43:19 UTC

[Edit] Also see Toni made good on the credit re-adjustment. Now only getting a quarter of what was awarded prior for 4 times the length of processing time.

hm, this is the first time I've read someone complaining about credit being too high :-)

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1280
Credit: 4,854,031,959
RAC: 4,370,637
Level
Arg
Scientific publications
watwatwatwatwat
Message 53561 - Posted: 30 Jan 2020 | 8:08:35 UTC - in response to Message 53560.

[Edit] Also see Toni made good on the credit re-adjustment. Now only getting a quarter of what was awarded prior for 4 times the length of processing time.

hm, this is the first time I've read someone complaining about credit being too high :-)

My comment was simply an observation. The discussion about credit awarded among projects needs to be in another thread.

That has been hashed to death before many times over.

Search on CreditScrew or CreditNew. Oh where is Jeff Cobb?

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53563 - Posted: 30 Jan 2020 | 15:46:51 UTC - in response to Message 53541.

It's not just the speed. There's some DDoS prevention algorithm in operation, because my hosts get blocked if they try to contact the server one by one in rapid succession (from the same public IP address).
What can we do to mitigate this effect???

OAS: Many projects are adding a Max # WUs option in Preferences. Maybe add it with the choice of 1 or 2.

OAS: Bunkering for serial projects should be banned one way or another. These "races" and "sprints" have some folks requesting as many WUs per host as they can get, but the results don't get submitted to the work server until after the race start time, i.e. bunkering.

I triggered something a few days ago on GPUGrid that I've never seen before on a BOINC project. It was a fluke combination of things that had me upgrade my drivers but delay a reboot. It wouldn't have bothered anything else, but unbeknownst to me a slug of GPUGrid WUs had appeared. All those WUs had computation errors. Then both computers got banned with a Project Request. I thought it would be a 24-hour timeout I'd seen folks mention before, but it persisted for days. After a few days I tried a manual Project Update and it started working again. Can this Project Requested Ban be applied to bunkerers???

PrimeGrid has found a way to reduce bunkering - in the races, count only tasks that were both downloaded and returned during the period scheduled for the race.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53569 - Posted: 30 Jan 2020 | 19:26:35 UTC - in response to Message 53559.
Last modified: 30 Jan 2020 | 20:02:59 UTC

Keith Myers wrote:

Also see Toni made good on the credit re-adjustment. Now only getting a quarter of what was awarded prior for 4 times the length of processing time.

however, even now there are some unexplainable differences, e.g. between the following two tasks which ran on the same GPU (GTX980Ti) in the same PC:

http://www.gpugrid.net/result.php?resultid=21645452
runtime: 39,444 secs - 202,525 credit points

http://www.gpugrid.net/result.php?resultid=21645453
runtime: 39,899 secs - 168,771 credit points

any idea how come?

Edit: only now did I realize what happened: the second task cited above missed the 24-hour limit by 1 minute 17 seconds. Hence the 20% difference in credit :-(
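Erich's numbers line up with GPUGrid's long-standing return bonuses (+50% credit for results returned within 24 hours, +25% within 48 hours), if I remember the scheme correctly. A quick check:

```python
# Back out the base credit from each award, assuming the +50%/24 h and
# +25%/48 h bonus tiers. If that assumption is right, both bases match.
base_fast = 202525 / 1.50   # task returned inside 24 h
base_slow = 168771 / 1.25   # task that missed the 24 h cutoff by 77 s

assert abs(base_fast - base_slow) < 1   # same base, ~135,017 credits
ratio = 202525 / 168771                 # 1.50 / 1.25 = 1.2, the 20% gap
```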

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1280
Credit: 4,854,031,959
RAC: 4,370,637
Level
Arg
Scientific publications
watwatwatwatwat
Message 53570 - Posted: 30 Jan 2020 | 21:38:06 UTC - in response to Message 53569.

Also, Toni explained over in the QC Chemistry forum that tasks run for different lengths of time depending on how many atoms are in the model.

So for the exact same MDADpr2 campaign, there can be differing credit awards depending on the task and whether it is hard to crunch or easy.

Throw the early-return benefit and late-return penalty on top of that, and there can be a lot of variability.

davidBAM
Send message
Joined: 17 Sep 18
Posts: 11
Credit: 695,185,729
RAC: 0
Level
Lys
Scientific publications
watwatwat
Message 53572 - Posted: 31 Jan 2020 | 3:51:11 UTC
Last modified: 31 Jan 2020 | 4:41:43 UTC

I crunch competitively on up to 20 nVidia Turing cards and believe that every WU I do is returned within 24 hours.

You have already solved the 'bunkering' problem, but if you want to improve the supply of WUs to us volunteers it is very, very simple: just follow PrimeGrid's lead and remove GPUGrid from the projects white-listed by GridCoin. Keep it to unpaid volunteers.

[VENETO] boboviz
Send message
Joined: 10 Sep 10
Posts: 158
Credit: 388,132
RAC: 0
Level

Scientific publications
wat
Message 53573 - Posted: 31 Jan 2020 | 9:29:43 UTC - in response to Message 53502.

It is a little ironic that a project specifically for GPUs supports fewer GPUs than other projects. Einstein, Milky Way, Seti, etc.: no problem.

If I'm not wrong, the problem is that they don't have a dedicated GPU developer.
Today it is not impossible to convert CUDA code to OpenCL, but it seems that they are not able to do this.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53574 - Posted: 31 Jan 2020 | 14:22:39 UTC - in response to Message 53572.

I crunch competitively on up to 20 nVidia Turing cards and believe that every WU I do is returned within 24 hours.

You have already solved the 'bunkering' problem, but if you want to improve the supply of WUs to us volunteers it is very, very simple: just follow PrimeGrid's lead and remove GPUGrid from the projects white-listed by GridCoin. Keep it to unpaid volunteers.
I've got a better idea: avoid PrimeGrid.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53575 - Posted: 31 Jan 2020 | 14:58:56 UTC - in response to Message 53573.

Today it is not impossible to convert CUDA code to OpenCL, but it seems that they are not able to do this.

There is no reason to. They have more than enough volunteers with Nvidia cards, and it is simpler to support one set rather than two.

In fact, even if you went to OpenCL for both, I think it is harder to support both manufacturers, from the problems I have seen. Supporting both is more for political-correctness reasons than out of need.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53576 - Posted: 31 Jan 2020 | 15:05:38 UTC - in response to Message 53575.

... They have more than enough volunteers with Nvidia cards

and very often they don't have enough work for them. Hence, bringing a second group of crunchers on board would only enlarge the problem of "no tasks available" ...

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53577 - Posted: 31 Jan 2020 | 16:11:19 UTC - in response to Message 53573.

It is a little ironic that a project specifically for GPUs supports fewer GPUs than other projects. Einstein, Milky Way, Seti, etc.: no problem.

If I'm not wrong, the problem is that they don't have a dedicated GPU developer.
Today it is not impossible to convert CUDA code to OpenCL, but it seems that they are not able to do this.

I've seen a program called swan that is supposed to be able to do this automatically. No idea if an up-to-date version is available.

I'd expect whether GPUGRID actually does this to depend on how fast the resulting OpenCL code runs. If it is much slower than the CUDA code, why would they want to release it?

Note - I found a version of swan, with a note saying that it is no longer maintained and is therefore deprecated. If you're good enough in both CUDA and OpenCL, why don't you take over maintenance of this program, and see if you can make it produce an OpenCL version of the GPUGRID code that runs fast enough to be worth releasing?

https://github.com/Acellera/swan

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53578 - Posted: 31 Jan 2020 | 16:22:36 UTC - in response to Message 53577.

As was correctly said above, it's not a technical problem, but a matter of putting effort where it is more critical, i.e. the scientific part (experiment preparation and analysis).

klepel
Send message
Joined: 23 Dec 09
Posts: 189
Credit: 4,181,686,293
RAC: 1,793,796
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53579 - Posted: 31 Jan 2020 | 18:03:16 UTC - in response to Message 53572.

I crunch competitively on up to 20 nVidia Turing cards and believe that every WU I do is returned within 24 hours.

Cheers! Congratulations on your personal success, which allows you to buy so many GPUs and keep them crunching for all the years to come!
You have already solved the 'bunkering' problem, but if you want to improve the supply of WUs to us volunteers it is very, very simple: just follow PrimeGrid's lead and remove GPUGrid from the projects white-listed by GridCoin.

As I understand it, for the project team it is better to get the results sooner rather than later, so they are able to analyze and investigate them and issue new WUs if needed, rather than waiting for a few happy crunchers to crunch them over a long time (as might be the case with PrimeGrid, just as you mentioned them). So they have an interest in having the biggest possible pool of Nvidia GPUs at their disposal!
I have never read that BOINC guarantees an uninterrupted work supply, so that volunteers will always have work to crunch.
Keep it to unpaid volunteers

PAID?! Where is this paid "volunteer"? Just as an example, I spend about USD 300.00 on electric bills per month just for BOINC, beside all the hardware I buy for BOINC, which I would not buy if I were not an addict.
I earn about USD 9.00 worth of Gridcoin per month, so I would rather see it as a very small subsidy at best, or just another dope to keep me crunching BOINC!

davidBAM
Send message
Joined: 17 Sep 18
Posts: 11
Credit: 695,185,729
RAC: 0
Level
Lys
Scientific publications
watwatwat
Message 53580 - Posted: 31 Jan 2020 | 19:07:56 UTC - in response to Message 53579.

Thank you for the explanation. $9 is indeed a paltry amount

Miklos M.
Send message
Joined: 16 Jun 12
Posts: 17
Credit: 170,413,806
RAC: 0
Level
Ile
Scientific publications
watwatwatwat
Message 53584 - Posted: 1 Feb 2020 | 22:57:50 UTC

Got 3 today for 4 computers; could use many more. It started great a few days back, but now I'm getting too few.

Miklos M.
Send message
Joined: 16 Jun 12
Posts: 17
Credit: 170,413,806
RAC: 0
Level
Ile
Scientific publications
watwatwatwat
Message 53585 - Posted: 1 Feb 2020 | 23:17:43 UTC

Thank you Toni, just got one more.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53586 - Posted: 2 Feb 2020 | 1:25:18 UTC

The current situation at GPUGRID is definitely better than the situation at the Predictor@Home project for several months before it shut down. Their development team had split up. One part kept the server, the right to use the Predictor@Home name, and so on. The part that left took away the knowledge of how to create useful new workunits. The remainder of the team could only increase the number of failures each workunit was allowed, every time a previous task for that workunit failed. For several months, this meant that very few tasks were available, and all of them failed.

Which if any of you would prefer that situation?

Jacosito
Send message
Joined: 14 May 18
Posts: 7
Credit: 36,555,161
RAC: 0
Level
Val
Scientific publications
watwat
Message 53587 - Posted: 2 Feb 2020 | 11:30:13 UTC

The message is:
02/02/2020 7:38:19 | GPUGRID | Server message: New version of ACEMD needs 953.20MB more disk space. You currently have 2861.49 MB available and it needs 3814.70 MB.

I have 963.65 GB free for BOINC to use.

You currently have 2861.49 MB = 2.8 GB
available and it needs 3814.70 MB = 3.8 GB

Then 3.8 GB > 963 GB???

It doesn't make sense to me.

Chris
Send message
Joined: 11 Sep 19
Posts: 3
Credit: 176,750,964
RAC: 0
Level
Ile
Scientific publications
wat
Message 53588 - Posted: 2 Feb 2020 | 13:31:28 UTC

Anyone having issues with the GPU work units crashing their Geforce RTX 2080 Ti's?

Before this series came out this month, my system was working like a champ.

Suddenly this month it seems like something is making my system overheat if I enable the GPU tasks.

Have a Ryzen 9 3900X that I've been running full tilt for like 6 months now, no problems.

Then suddenly the system hangs, with all the fans (CPU, GPU, chassis, etc.) off and the motherboard unresponsive to the reset and power buttons. The LEDs on the chipset and motherboard remain lit.

To resolve I have to turn power off at the PSU, and then boot.

The only thing that comes up in the system error logs is when I turn the PSU power off, that there is an unexpected Kernel power failure at that time (in the Windows error logs).

Almost like it is in a sleep/suspend mode, but all that is off.

Last night I left the GPU disabled and the CPU only tasks worked fine.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53589 - Posted: 2 Feb 2020 | 14:07:52 UTC - in response to Message 53588.

Anyone having issues with the GPU work units crashing their Geforce RTX 2080 Ti's?
They are working fine on my hosts.
Perhaps your RTX 2080Ti is overclocked (too much).
What PSU do you use?
Does it have two independent 8-pin PCI-E power connectors?
Are those connected to your RTX 2080Ti?

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53590 - Posted: 2 Feb 2020 | 14:20:31 UTC - in response to Message 53587.

Jacosito, in the BOINC Manager look at Options/Computing Preferences/Disk & Memory tab. There are 3 check boxes; I uncheck the first two and check only the third. Mine says "Use no more than 80% of total." Make sure you give BOINC permission to use enough storage.
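For the record, those three check boxes map onto tags in global_prefs_override.xml in the BOINC data directory. The tag names are standard BOINC preferences; the values below just mirror what Aurum describes, and I believe 0 means "no limit" for the first two, matching the unchecked boxes (worth verifying against your client version's docs):

```xml
<global_preferences>
    <disk_max_used_gb>0</disk_max_used_gb>    <!-- first box unchecked -->
    <disk_min_free_gb>0</disk_min_free_gb>    <!-- second box unchecked -->
    <disk_max_used_pct>80</disk_max_used_pct> <!-- "Use no more than 80% of total" -->
</global_preferences>
```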

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53591 - Posted: 2 Feb 2020 | 14:36:33 UTC - in response to Message 53588.

Then suddenly system hangs, with all the fans (CPU, GPU, Chassis, etc) all off and the motherboard unresponsive to the reset buttons and the power button. The LED's on the Chipset and Motherboard remain lit.

To resolve I have to turn power off at the PSU, and then boot.
This describes behavior I see occasionally with my 1080 Ti's, but I don't recall it happening on my 2080 Ti's. I don't know why it happens; I just reboot and it goes away. I never overclock, and it's not specific to GG.

csbyseti
Send message
Joined: 4 Oct 09
Posts: 6
Credit: 1,087,450,695
RAC: 3,057,442
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53592 - Posted: 2 Feb 2020 | 18:15:51 UTC - in response to Message 53588.

Anyone having issues with the GPU work units crashing their Geforce RTX 2080 Ti's?

Before this series came out this month my systems was working like a champ.

Suddenly this month it seems like something is making my system overheat if I enable the GPU tasks.

Have a Ryzen 9 3900X that I've been running full tilt for like 6 months now, no problems.

Then suddenly the system hangs, with all the fans (CPU, GPU, chassis, etc.) off and the motherboard unresponsive to the reset and power buttons. The LEDs on the chipset and motherboard remain lit.

To resolve I have to turn power off at the PSU, and then boot.

The only thing that comes up in the system error logs is an unexpected Kernel-Power failure (in the Windows error logs) at the moment I turn the PSU off.

Almost like it is in a sleep/suspend mode, but all that is off.

Last night I left the GPU disabled and the CPU only tasks worked fine.


If you have to switch the power supply off on the AC side, the PSU has been locked out by overcurrent protection or an unstable DC voltage. Switching it off resets the 'electronic' fuse.

There can be different causes: overcurrent in the power supply itself, overcurrent detected by the mainboard, or unstable AC input voltage.

Perhaps the RTX 2080 Ti is producing power-load peaks. The magazine c't has measured peaks of 380 W for an RTX 2080 without overclocking, depending on the card model.

Nick Name
Send message
Joined: 3 Sep 13
Posts: 53
Credit: 1,533,531,731
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 53593 - Posted: 2 Feb 2020 | 21:16:22 UTC - in response to Message 53588.

...Then suddenly the system hangs, with all the fans (CPU, GPU, chassis, etc.) off and the motherboard unresponsive to the reset and power buttons. The LEDs on the chipset and motherboard remain lit...

I had a similar problem last year. I started seeing invalid work across multiple projects, gradually increasing for a while until one day almost everything was failing. I found the power cables to the GPU had some burnt pins. Replacing them fixed it for a while, then I started having problems exactly like you describe. This time I found burnt pins in the PSU. I replaced the PSU and eventually had to RMA the GPU; I think the PSU problems broke something. Fortunately it was repaired under warranty and works great now.

If your PSU power cables and connections to the GPU are ok then I would suspect and test for a failing GPU. Trying another PSU is also a good idea if you have the option.

This assumes you haven't done anything to change the GPU behavior, like overclock it or install new monitoring software. I once had major problems with a certain manufacturer's GPU utility, now I stick to Afterburner or Nvidia Inspector. If you've changed something like this, revert back.
____________
Team USA forum | Team USA page
Join us and #crunchforcures. We are now also folding:join team ID 236370!

Chris
Send message
Joined: 11 Sep 19
Posts: 3
Credit: 176,750,964
RAC: 0
Level
Ile
Scientific publications
wat
Message 53614 - Posted: 5 Feb 2020 | 1:17:07 UTC - in response to Message 53589.

Yes (2) 8 pin supplies to my RTX 2080Ti.

The power supply is a Corsair CX750M.

I probably am over-taxing it.

Nothing overclocked beyond factory OC (if any).

Thanks.

It was odd; before this, it ran all night long with everything fully loaded, no problem.

Chris
Send message
Joined: 11 Sep 19
Posts: 3
Credit: 176,750,964
RAC: 0
Level
Ile
Scientific publications
wat
Message 53615 - Posted: 5 Feb 2020 | 1:18:29 UTC - in response to Message 53590.

Thanks.

Yes I tried that, still did it.

Right now disabling GPU work units.

I am thinking my power supply is struggling, as others suggested.

Shayol Ghul
Send message
Joined: 11 Aug 17
Posts: 2
Credit: 1,024,938,819
RAC: 0
Level
Met
Scientific publications
watwatwat
Message 53630 - Posted: 9 Feb 2020 | 12:52:03 UTC

At least you're receiving work units. For the last two weeks I have not received any. All my equipment is running well, and work units average five hours. Please send some work.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53632 - Posted: 9 Feb 2020 | 20:31:29 UTC - in response to Message 53630.

At least you're receiving work units. For the last two weeks I have not received any. All my equipment is running well, and work units average five hours. Please send some work.

Update your graphics drivers. You are using versions known to have problems with some OpenCL work.

Richard Haselgrove
Send message
Joined: 11 Jul 09
Posts: 1576
Credit: 5,517,861,851
RAC: 8,636,022
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53633 - Posted: 9 Feb 2020 | 21:34:54 UTC - in response to Message 53632.

... some OpenCL work.

GPUGrid writes its apps in CUDA.

[VENETO] boboviz
Send message
Joined: 10 Sep 10
Posts: 158
Credit: 388,132
RAC: 0
Level

Scientific publications
wat
Message 53635 - Posted: 9 Feb 2020 | 22:34:15 UTC - in response to Message 53575.

Supporting both is more for political-correctness reasons rather than need.

You're right. There is no work for CUDA, let alone for OpenCL.

klepel
Send message
Joined: 23 Dec 09
Posts: 189
Credit: 4,181,686,293
RAC: 1,793,796
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53641 - Posted: 11 Feb 2020 | 21:03:08 UTC - in response to Message 53614.

The power supply is a Corsair CX750M.

I have the same power supply Corsair CX750M. And I have the same problem you describe:
Suddenly this month it seems like something is making my system overheat if I enable the GPU tasks.

Have a Ryzen 9 3900X that I've been running full tilt for like 6 months now, no problems.

Then suddenly the system hangs, with all the fans (CPU, GPU, chassis, etc.) off and the motherboard unresponsive to the reset and power buttons. The LEDs on the chipset and motherboard remain lit.

To resolve I have to turn power off at the PSU, and then boot.

My system is an AMD 1700X with a GTX 1070. I have tried to resolve the problem by lowering the CPU clocks from the beginning. What seems to help is lowering the GPU frequency by 120 MHz and raising the fan speed to 97% on this particular GPU. But the computer still freezes frequently.
Lately I have been wondering whether it might be the PSU as well, as I had a "bluescreen" problem on another computer a few years ago that I solved with a certified, higher-wattage PSU.
So it seems to me that this PSU design might be poorly suited to 24/7 crunching.

Jacosito
Send message
Joined: 14 May 18
Posts: 7
Credit: 36,555,161
RAC: 0
Level
Val
Scientific publications
watwat
Message 53642 - Posted: 12 Feb 2020 | 2:47:39 UTC - in response to Message 53590.
Last modified: 12 Feb 2020 | 2:48:05 UTC

Same here, no WUs.

Jacosito
Send message
Joined: 14 May 18
Posts: 7
Credit: 36,555,161
RAC: 0
Level
Val
Scientific publications
watwat
Message 53643 - Posted: 12 Feb 2020 | 2:49:50 UTC - in response to Message 53588.

Can you send me your app_config.xml?

My GPU and CPU are both liquid-cooled.

Cheers
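For anyone unfamiliar with the file being asked about: an app_config.xml for this project would look roughly like the sketch below. This is an illustrative example, not a recommended configuration; the app name acemd3 is assumed from the wrapper output earlier in the thread, and the usage values are placeholders.

```xml
<!-- app_config.xml, placed in projects/www.gpugrid.net/ inside the
     BOINC data directory. Illustrative values only. -->
<app_config>
   <app>
      <name>acemd3</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
```

The client picks the file up on restart, or via "Read config files" in the BOINC Manager.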

Bruce Downing
Send message
Joined: 20 Jul 09
Posts: 14
Credit: 294,872,142
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 53651 - Posted: 16 Feb 2020 | 0:53:17 UTC - in response to Message 53462.

I'm not getting any work units. Why?
____________

Bruce Downing
Send message
Joined: 20 Jul 09
Posts: 14
Credit: 294,872,142
RAC: 0
Level
Asn
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 53652 - Posted: 16 Feb 2020 | 0:56:23 UTC

I get "no tasks available" over and over
____________

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53653 - Posted: 16 Feb 2020 | 1:28:06 UTC - in response to Message 53652.

I get "no tasks available" over and over


Why don't you click on Donate and send them enough money that they can hire another person to create workunits?

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 566
Credit: 5,845,077,024
RAC: 12,896,769
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53654 - Posted: 16 Feb 2020 | 13:55:27 UTC - in response to Message 53653.

Why don't you click on Donate and send them enough money that they can hire another person to create workunits?

I decided to take your suggestion on the fly.
Always asking for new WUs/features, I also think it is only fair to give something back, beyond our computing power.
But unfortunately, it seems that the donation form is currently unavailable...

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53656 - Posted: 16 Feb 2020 | 17:50:59 UTC - in response to Message 53654.

I should be able to make WUs this week.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53657 - Posted: 16 Feb 2020 | 18:16:07 UTC - in response to Message 53656.

I should be able to make WUs this week.

Yippee!!! All 50,000?

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53658 - Posted: 16 Feb 2020 | 18:16:23 UTC - in response to Message 53656.

I should be able to make WUs this week.

thanks, Toni, for the information :-)

marsinph
Send message
Joined: 11 Feb 18
Posts: 41
Credit: 579,891,424
RAC: 0
Level
Lys
Scientific publications
wat
Message 53659 - Posted: 16 Feb 2020 | 19:18:20 UTC

Wait and see!!!
Toni, you have announced WUs so many times, and then nothing comes.

And when there are some, the first comers take all the WUs and crunch them, of course, but nothing is left for the others.

You could say the first takes all, and that can distort the world competition.
I suggest limiting WUs per user, not per host.
A user with a hundred GPUs receives hundreds of WUs; a user with one or two GPUs receives... nothing!!!

And world competitions get distorted this way (like SETIBZH).

It was more or less the same on Xanson last year!
The users who know when WUs will be released have a serious advantage: they can take a lot, leaving nothing for the others.

So I suggest setting limits per user as well. A limit of 2 WUs/GPU is not enough.



Nick Name
Send message
Joined: 3 Sep 13
Posts: 53
Credit: 1,533,531,731
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 53660 - Posted: 17 Feb 2020 | 0:23:40 UTC - in response to Message 53659.

Wait and see!!!
Toni, you have announced WUs so many times, and then nothing comes.

And when there are some, the first comers take all the WUs and crunch them, of course, but nothing is left for the others.

You could say the first takes all, and that can distort the world competition.
I suggest limiting WUs per user, not per host.
A user with a hundred GPUs receives hundreds of WUs; a user with one or two GPUs receives... nothing!!!

And world competitions get distorted this way (like SETIBZH).

It was more or less the same on Xanson last year!
The users who know when WUs will be released have a serious advantage: they can take a lot, leaving nothing for the others.

So I suggest setting limits per user as well. A limit of 2 WUs/GPU is not enough.

I enjoy competing as much as anyone but competition and stats are the last thing projects should be worried about. They should focus on doing good and useful science first, and issue work according to their needs in whatever manner best meets their goals. There is zero reason to limit work here as you describe, unless the project sees a need for it. Some getting more work than others and some sort of potential stat distortion isn't compelling enough to make such drastic changes.

Let's remember that we serve the project, not the other way around. If they're getting work done in a timely manner and getting the results they want, that's really what counts.
____________
Team USA forum | Team USA page
Join us and #crunchforcures. We are now also folding:join team ID 236370!

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53661 - Posted: 17 Feb 2020 | 0:29:12 UTC - in response to Message 53659.

It will say the first takes all, and so can falsify the world competition.
This is not a competition. This is cooperation.

Pop Piasa
Avatar
Send message
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level
Gln
Scientific publications
watwat
Message 53662 - Posted: 17 Feb 2020 | 0:41:45 UTC - in response to Message 53661.

It will say the first takes all, and so can falsify the world competition.
This is not a competition. This is cooperation.


👍👍 May I also add that volunteering time here on a GPU that takes more than 80 hours to complete and upload a WU actually delays the project, IMHO.
Do you agree, Retvari? I finished all of mine a week ago.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53671 - Posted: 17 Feb 2020 | 20:55:16 UTC - in response to Message 53659.

A user with hundred GPU, receive hundred of of WU, user with one or two GPU, receive ... nothing !!!

Sorry, but this is silly. When there is work, a user with 100 GPUs gets at most 100 WUs, whereas a user with 1 GPU gets at most 2 WUs. When no work is available, none of them get any work. How is this not fair? You are not seriously suggesting that the scientists limit their progress rate so that users with fewer GPUs can have bigger numbers on BOINCstats etc.?

MrS
____________
Scanning for our furry friends since Jan 2002

WPrion
Send message
Joined: 30 Apr 13
Posts: 87
Credit: 1,065,409,111
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53673 - Posted: 17 Feb 2020 | 22:28:59 UTC

Competition?!? Pffftt! I'm in it for the money. Cheques are mailed out at the end of the month, right?

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53679 - Posted: 18 Feb 2020 | 8:11:10 UTC

I create WUs when they are actually useful. It may not be appropriate for a competition setting.

Competition cannot be the correct motivation, in my opinion: it would not provide an advantage to science, while leading to "busy work", credit inflation, cheating and all sorts of nasties.

davidBAM
Send message
Joined: 17 Sep 18
Posts: 11
Credit: 695,185,729
RAC: 0
Level
Lys
Scientific publications
watwatwat
Message 53680 - Posted: 18 Feb 2020 | 9:43:58 UTC - in response to Message 53679.

@Toni - you must absolutely do what you consider best for the project at all times.

However, I do also think that the challenges such as Formula Boinc, BoincStats etc are truly excellent in that they provide other reasons for people to become / remain interested. Challenges provide a willing and motivated source of crunchers so that is quite clearly an advantage to science.

The only 'nasties' I am aware of are basically people who THINK that cheating is taking place, or that it is even possible.

jiipee
Send message
Joined: 4 Jun 15
Posts: 19
Credit: 6,013,040,696
RAC: 7,650,733
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 53711 - Posted: 21 Feb 2020 | 8:09:15 UTC - in response to Message 53641.

The power supply is a Corsair CX750M.

I have the same power supply Corsair CX750M. And I have the same problem you describe:
Suddenly this month it seems like something is making my system overheat if I enable the GPU tasks.

Have a Ryzen 9 3900X that I've been running full tilt for like 6 months now, no problems.

Then suddenly the system hangs, with all the fans (CPU, GPU, chassis, etc.) off and the motherboard unresponsive to the reset and power buttons. The LEDs on the chipset and motherboard remain lit.

To resolve I have to turn power off at the PSU, and then boot.

My system is an AMD 1700X with a GTX 1070. I have tried to resolve the problem by lowering the CPU clocks from the beginning. What seems to help is lowering the GPU frequency by 120 MHz and raising the fan speed to 97% on this particular GPU. But the computer still freezes frequently.
Lately I have been wondering whether it might be the PSU as well, as I had a "bluescreen" problem on another computer a few years ago that I solved with a certified, higher-wattage PSU.
So it seems to me that this PSU design might be poorly suited to 24/7 crunching.

I have an Asus B85M-E mobo with an i5-4690K 4-core CPU and an RTX 2080 Super GPU. This system has a Corsair HX1000i PSU. It runs GPUGRID steadily on Win10 at 96-97% GPU load. The PSU fan hardly ever spins, usually only during power-on.

Profile ServicEnginIC
Avatar
Send message
Joined: 24 Sep 10
Posts: 566
Credit: 5,845,077,024
RAC: 12,896,769
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53712 - Posted: 21 Feb 2020 | 11:15:19 UTC

Looking at the header of this thread, "Large scale experiment: MDAD":
"Large scale" perfectly suits the current situation: 211,715 WUs ready to send... and growing.
Amusing

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53778 - Posted: 27 Feb 2020 | 4:55:17 UTC

Where can I find information on the relative performances of GTX 10 series graphics boards to the newer GTX 16 and RTX 20 series boards for MDAD workunits?

I'm not interested in their relative performance for games.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53780 - Posted: 27 Feb 2020 | 8:43:41 UTC - in response to Message 53778.
Last modified: 27 Feb 2020 | 8:44:08 UTC

Where can I find information on the relative performances of GTX 10 series graphics boards to the newer GTX 16 and RTX 20 series boards for MDAD workunits?

These data are based on Folding@home work:

https://docs.google.com/spreadsheets/d/1vcVoSVtamcoGj5sFfvKF_XlvuviWWveJIg_iZ8U2bf0/pub?output=html

https://docs.google.com/spreadsheets/d/1v5gXral3BcFOoXs5n1M6l_Uo3pZpQYogn6gVlxRPnz0/edit#gid=0

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 53874 - Posted: 7 Mar 2020 | 13:34:34 UTC - in response to Message 53513.

The purpose of the work is (broadly speaking) methods development, i.e. build a dataset to improve the foundation of future MD-based research (not just GPUGRID). More details may come if it works ;)

Still waiting with bated breath for more details...

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53884 - Posted: 10 Mar 2020 | 17:39:16 UTC

I'm receiving many tasks which are the last one of their batch:

1nkvA00_450_0-TONI_MDADpr4sn-9-10-RND4090_0

Or near the end of their batch:
1gaxA04_348_0-TONI_MDADpr4sg-8-10-RND1850_0

In these names, the second number from the end (10) is the total number of tasks in the batch, and the number before it (9 or 8) is the sequential number of the given task within the batch (starting from 0).

I expect the number of unsent tasks in the queue will drop significantly during the next days.
There are 305,826 unsent tasks as I write this.
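As an editorial illustration (not project code), the batch position can be read out of a task name like the ones above, assuming the sequence and total fields always sit just before the trailing RND part:

```python
def batch_position(task_name: str) -> tuple[int, int]:
    """Return (sequence, total) from a GPUGRID-style task name.

    Assumes the hyphen-separated layout shown above:
    <structure>-<batch label>-<sequence>-<total>-RND<suffix>
    """
    parts = task_name.split("-")
    # parts[-1] is the RND suffix; the two fields before it are
    # the task's sequence number within the batch and the batch size.
    return int(parts[-3]), int(parts[-2])

seq, total = batch_position("1nkvA00_450_0-TONI_MDADpr4sn-9-10-RND4090_0")
print(seq, total)  # 9 10 -> the last task (index 9) of a 10-task batch
```

So a task ending in "-9-10-RND…" is the final step of its 10-step batch, which is why many of these indicate the queue is near the end of the run.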

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53885 - Posted: 10 Mar 2020 | 18:02:11 UTC - in response to Message 53884.

Good. I've raised the priority for a selection of the WUs because they were coming in too slowly.

Win10
Send message
Joined: 28 Sep 17
Posts: 1
Credit: 398,126,739
RAC: 0
Level
Asp
Scientific publications
watwatwat
Message 53889 - Posted: 12 Mar 2020 | 0:21:47 UTC

Why does the ACEMD need so much free disk space?

I can't get work, and I get an error message:

Message from Server: New version of ACEMD needs 3814.70MB more disk space. You currently have 0.00 MB available and it needs 3814.70 MB.

Keith Myers
Send message
Joined: 13 Dec 17
Posts: 1280
Credit: 4,854,031,959
RAC: 4,370,637
Level
Arg
Scientific publications
watwatwatwatwat
Message 53890 - Posted: 12 Mar 2020 | 1:06:13 UTC - in response to Message 53889.

You need to increase your disk limits in the Manager or in your Preferences at the website for your host.

That will give enough space to download more work to your host.

stefkoch
Send message
Joined: 13 Mar 20
Posts: 1
Credit: 0
RAC: 0
Level

Scientific publications
wat
Message 53901 - Posted: 13 Mar 2020 | 11:38:02 UTC

So the question of what calculations are actually being executed on the machines is not really answered yet, IMO.
"Methods development" sounds rather vague.
Do you plan on open sourcing your code which is being run?
What methods are being developed and evaluated?
Is there any github repo for the MDAD code?

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53904 - Posted: 13 Mar 2020 | 13:19:53 UTC - in response to Message 53541.

[snip]

OAS: Bunkering for serial projects should be banned one way or another. These "races" and "sprints" have some folks requesting as many WUs per host as they can get but they don't get submitted to the work server until after the race start time, i.e. bunkering.

[snip]

A way to stop bunkering: Persuade all projects that have such races or sprints to require that only WUs downloaded during the race count toward the race.

davidBAM
Send message
Joined: 17 Sep 18
Posts: 11
Credit: 695,185,729
RAC: 0
Level
Lys
Scientific publications
watwatwat
Message 53906 - Posted: 13 Mar 2020 | 13:32:40 UTC - in response to Message 53904.

Vastly easier to show per-user stats on average turnaround time. Much of the code must already exist as the project already awards bonuses based on turnaround time.

That would quantify the extent of the perceived bunkering 'problem'.

Erich56
Send message
Joined: 1 Jan 15
Posts: 1087
Credit: 6,444,531,926
RAC: 26,519,051
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 53910 - Posted: 13 Mar 2020 | 17:08:09 UTC - in response to Message 53884.

Zoltan wrote on March 10th:

I expect the number of unsent tasks in the queue will drop significantly during the next days.
There are 305,826 unsent tasks as I wrote this.

well, there are still 300,349 left at this point (i.e. 3 days after your posting)
:-)

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53920 - Posted: 14 Mar 2020 | 17:59:12 UTC - in response to Message 53910.

Zoltan wrote on March 10th:
I expect the number of unsent tasks in the queue will drop significantly during the next days.
There are 305,826 unsent tasks as I wrote this.

well, there are still 300,349 left at this point (i.e. 3 days after your posting)
:-)
The highest number of unsent workunits was over 310,000, so every 5,000 drop is about 1.6%.
Now there are 297,472 unsent workunits; the count decreased by 5 while I was writing this post.
310k to 297k workunits is roughly a 4% decrease in about 13 days.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53952 - Posted: 20 Mar 2020 | 3:41:57 UTC

Could you mention whether the MDAD work happens to be related to COVID-19?

Sabrina Tarson
Send message
Joined: 19 Jan 12
Posts: 1
Credit: 20,965,042
RAC: 0
Level
Pro
Scientific publications
wat
Message 53953 - Posted: 20 Mar 2020 | 4:12:13 UTC - in response to Message 53952.

Could you mention whether the MDAD work happens to be related to COVID-19?


The last thing that Toni mentioned about the purpose of these workunits was this.

The purpose of the work is (broadly speaking) methods development, i.e. build a dataset to improve the foundation of future MD-based research (not just GPUGRID). More details may come if it works ;)

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 53962 - Posted: 21 Mar 2020 | 18:21:56 UTC - in response to Message 53952.

Could you mention whether the MDAD work happens to be related to COVID-19?


MDAD workunits are an ambitious effort to map the protein conformational space. Although the scope of the work is general, we expect that virion proteins will be among the very first test cases.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 53964 - Posted: 21 Mar 2020 | 19:31:12 UTC - in response to Message 53962.

Could you mention whether the MDAD work happens to be related to COVID-19?


MDAD workunits are an ambitious effort to map the protein conformational space. Although the scope of the work is general, we expect that virion proteins will be among the very first test cases.

Thank you.

Dotsch
Send message
Joined: 2 Jul 07
Posts: 31
Credit: 424,791
RAC: 0
Level

Scientific publications
watwatwatwatwat
Message 54010 - Posted: 24 Mar 2020 | 9:28:26 UTC - in response to Message 53962.

Could you mention whether the MDAD work happens to be related to COVID-19?


MDAD workunits are an ambitious effort to map the protein conformational space. Although the scope of the work is general, we expect that virion proteins will be among the very first test cases.

Do you also have any CPU work planned, or is it GPU only?
I would be happy if my system could also contribute to other COVID efforts/methods.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 54018 - Posted: 24 Mar 2020 | 12:48:39 UTC

Rosetta@Home has announced that they are doing some COVID-19 work, CPU only.

Atri
Send message
Joined: 15 Mar 20
Posts: 4
Credit: 752,098
RAC: 0
Level
Gly
Scientific publications
wat
Message 54019 - Posted: 24 Mar 2020 | 12:57:51 UTC

What does this MDAD project study?
____________

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 54075 - Posted: 26 Mar 2020 | 0:34:08 UTC - in response to Message 54018.

Rosetta@Home has announced that they are doing some COVID-19 work, CPU only.

They're being issued; I have 6 machines running the Rosetta COVID-19 WUs.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 388,572
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 54076 - Posted: 26 Mar 2020 | 1:42:13 UTC - in response to Message 54075.

Rosetta@Home has announced that they are doing some COVID-19 work, CPU only.

They're being issued, I have 6 machines running the Rosetta Covid-19 WUs.

They also announced that not all of the workunits related to COVID-19 have COVID-19 in their names. Some of those with foldit in their names are also related.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 54081 - Posted: 26 Mar 2020 | 15:03:50 UTC - in response to Message 54076.
Last modified: 26 Mar 2020 | 15:04:42 UTC

Rosetta@Home has announced that they are doing some COVID-19 work, CPU only.
They're being issued, I have 6 machines running the Rosetta Covid-19 WUs.
They also announced that not all of the workunits related to COVID-19 have COVID-19 in their names. Some of those with foldit in their names are also related.

Exactly.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 54112 - Posted: 27 Mar 2020 | 14:29:36 UTC

Did we do this running ACEMD3???
https://www.youtube.com/watch?v=LlyofuzgsDo

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 54121 - Posted: 27 Mar 2020 | 18:17:11 UTC

Toni, I could deliver more work if you'd up our ration to 3 WUs per GPU to eliminate the pregnant pauses.

All: check out the Diamond project, with contributors from around the world including GDF. They are looking for a protease inhibitor to disrupt SARS-CoV-2 protein processing and virion assembly.

https://www.diamond.ac.uk/covid-19

spRocket
Send message
Joined: 27 Mar 20
Posts: 4
Credit: 42,538,988
RAC: 0
Level
Val
Scientific publications
wat
Message 54143 - Posted: 28 Mar 2020 | 21:56:28 UTC

I'm one of the newcomers, and I'm finding so far that my GTX 960/Ryzen 7-1700 combo is happily crunching away with 14 CPU threads on Rosetta, one plus the GPU for GPUGRID, and one in reserve.

Unfortunately, the only other NVIDIA GPU I have is dreadfully obsolete (a 7600), and my budget doesn't provide for a new GPU at the moment. Still, it's nice to be able to contribute what I can.

Aurum
Avatar
Send message
Joined: 12 Jul 17
Posts: 399
Credit: 13,024,025,382
RAC: 1,853,638
Level
Trp
Scientific publications
watwatwat
Message 54145 - Posted: 29 Mar 2020 | 15:29:41 UTC

The Washington Post has a video of the work on Covid-19 at the University of Torno:
https://www.washingtonpost.com/video/world/as-coronavirus-ravages-spain-scientists-are-in-a-race-against-time/2020/03/27/991a7c53-9a6f-4d15-aa49-d5bdc193ed57_video.html

Atri
Send message
Joined: 15 Mar 20
Posts: 4
Credit: 752,098
RAC: 0
Level
Gly
Scientific publications
wat
Message 54201 - Posted: 2 Apr 2020 | 8:33:12 UTC - in response to Message 53462.

Hi, is this project aimed at COVID-19?

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Send message
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Level
Ser
Scientific publications
watwatwatwat
Message 54202 - Posted: 2 Apr 2020 | 8:46:21 UTC - in response to Message 54201.

Hi, is the project aimed at COVID-19?



https://www.gpugrid.net/forum_thread.php?id=5089#54179

Pop Piasa
Avatar
Send message
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level
Gln
Scientific publications
watwat
Message 54405 - Posted: 21 Apr 2020 | 18:33:25 UTC

Toni, a question if I may.
Are the ACEMD tasks marked PABLO part of the workspace environment mapping project also?
Cheers.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 54409 - Posted: 21 Apr 2020 | 23:45:41 UTC - in response to Message 54405.

Toni, a question if I may.
Are the ACEMD tasks marked PABLO part of the workspace environment mapping project also?
Cheers.
No, they don't have MDAD in their names.
They are a follow-up of a previous batch from 2019.

Pop Piasa
Avatar
Send message
Joined: 8 Aug 19
Posts: 252
Credit: 458,054,251
RAC: 0
Level
Gln
Scientific publications
watwat
Message 54410 - Posted: 22 Apr 2020 | 1:21:07 UTC - in response to Message 54409.

They are a follow-up of a previous batch from 2019.


Thanks much, RZ. Is it maybe a rerun of the batch without wrappers that was labeled as long runs a couple of weeks ago?

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 16,606
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 54477 - Posted: 28 Apr 2020 | 11:34:19 UTC - in response to Message 54410.

They are a follow-up of a previous batch from 2019.

Thanks much, RZ. Is it maybe a rerun of the batch without wrappers that was labeled as long runs a couple of weeks ago?

Most probably this is it.

TuxNews
Send message
Joined: 7 Jan 19
Posts: 2
Credit: 218,212
RAC: 0
Level

Scientific publications
wat
Message 55041 - Posted: 5 Jun 2020 | 12:28:31 UTC
Last modified: 5 Jun 2020 | 12:29:39 UTC

@Toni Can we know what we are crunching? Tons of users have asked; we're curious.

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 55043 - Posted: 6 Jun 2020 | 0:13:25 UTC - in response to Message 55041.

@Toni Can we know what we are crunching? Tons of users have asked; we're curious.

This post by Toni briefly describes the purpose of the current work units:
https://gpugrid.net/forum_thread.php?id=5121&nowrap=true#54701

TuxNews
Send message
Joined: 7 Jan 19
Posts: 2
Credit: 218,212
RAC: 0
Level

Scientific publications
wat
Message 55044 - Posted: 6 Jun 2020 | 7:06:07 UTC - in response to Message 55043.

Thanks!

rod4x4
Send message
Joined: 4 Aug 14
Posts: 266
Credit: 2,219,935,054
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 55045 - Posted: 7 Jun 2020 | 0:26:16 UTC - in response to Message 55043.

@Toni Can we know what we are crunching? Tons of users have asked; we're curious.

This post by Toni briefly describes the purpose of the current work units:
https://gpugrid.net/forum_thread.php?id=5121&nowrap=true#54701

Another post here from Gianni describing the MDAD work units.
http://www.gpugrid.net/forum_thread.php?id=5089&nowrap=true#54172
