Message boards : News : Long WUs are out - 50% bonus
Double-sized WUs are out: *variant*_long*. They include a 50% bonus on the credits.
ID: 19912
Double-sized WUs are out: *variant*_long*. They include a 50% bonus on the credits.

Is there a possibility to opt out of those monsters, as they would probably blow the 2-day deadline on normal computers? Or do we have to abort them manually? And how do we recognize them?
____________
Greetings from Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 19913
The first one of these completed in 7h 32m (27116.891 s).
ID: 19916
10-IBUCH_8_variantP_long-0-2-RND0787_0
ID: 19917
Well,
ID: 19919
Ignasi,
ID: 19920
It seems that long WUs aren't after all that much of a gain for everybody. We have dropped by 1000 WUs in progress in one single day. We may be trying to push the computations too far here. The last thing we want is to scare people off and have them run away from GPUGRID.

You may have to consider external factors as well as your own internal choices. Do you have a medium/long-term record of that "WUs in progress" figure?

I suspect that you may have been affected by that other big NVidia beast in the BOINC jungle - SETI@home. They have been effectively out of action since the end of October, so you may well have been benefiting from extra resources during that time. SETI has been (slowly and intermittently) getting back up to speed over the last week or so, and as it does, you will inevitably lose some volunteers (or some share of their machines).

Having said that, I've got IBUCH_*_variantP_long running on all four hosts at the moment - I'll comment on your other questions when they've finished, in 5-38 hours from now.
ID: 19921
Sure.
ID: 19922
We have dropped by 1000 WUs in progress in one single day.

"In progress" includes both running tasks and queued tasks. As the running tasks take longer, there are fewer queued tasks, which is why there are fewer in progress. You will need to wait a few days for things to equilibrate. Don't hit the panic button.
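To put rough numbers on it (all figures below are invented purely for illustration, not real server statistics):

```python
# Hosts buffer a roughly fixed number of days of work, so doubling the task
# length roughly halves the number of tasks sitting "in progress" at any moment.
# All numbers below are invented for illustration.
hosts = 2000                  # hypothetical number of active hosts
buffer_days = 1.0             # hypothetical per-host work buffer
short_task_days = 8.0 / 24    # ~8 h per normal task
long_task_days = 16.0 / 24    # ~16 h per double-sized task

in_progress_short = hosts * buffer_days / short_task_days
in_progress_long = hosts * buffer_days / long_task_days
print(f"normal WUs in progress: {in_progress_short:.0f}")   # ~6000
print(f"long   WUs in progress: {in_progress_long:.0f}")    # ~3000
# The science done per day is the same; only the count of tasks drops.
```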
ID: 19923
Well, Ignasi, you don't read the other topics in your forum, do you? :)

I think you should separate the _long_ workunits from the normal WUs, or even create _short_ WUs for crunchers with older cards. If you can't develop an automated separation process, you should make it possible for the users to do it on their own (but not by aborting long WUs one by one manually). Some computers are equipped with multiple, but very different, cards, so this is a quite complicated problem.

The best solution would be to limit the running time of a WU instead of using a fixed simulation timeframe. (As far as I know, this is almost impossible for GPUGRID to implement.) My other project (rosetta@home) gives the user the opportunity to set a desired WU running time; you should offer this opportunity in some way or another too. My computers are on 24/7, so I could (and would) do even a 100 ns simulation if it were up to me, and if the credit/time ratio were the same (or higher).

Another way to get around this problem: you could create a new project under BOINC for these _long_ WUs - let's say it's called FermiGRID - and encourage users with faster cards to join FermiGRID and set GPUGRID as a backup project. You should contact NVidia about the naming of the new project before it starts; maybe they would consider it advertisement (and give you something in exchange), or maybe they would consider it a copyright infringement and send their lawyers to sue you for it. :)
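To illustrate the run-time idea, a toy sketch - the helper name, the per-card speeds and the ns limits are all invented; nothing here reflects the real applications:

```python
# Toy sketch of the "user-selected run time" idea (all figures invented):
# scale the simulated timeframe to the card's speed and the user's target.
def ns_for_target(card_ns_per_hour: float, target_hours: float,
                  min_ns: float = 1.0, max_ns: float = 10.0) -> float:
    """Pick a simulation length (in ns) that should finish in about target_hours."""
    ns = card_ns_per_hour * target_hours
    return max(min_ns, min(max_ns, ns))

# e.g. a hypothetical older card doing 0.25 ns/hour, user asks for 8 h WUs:
print(ns_for_target(card_ns_per_hour=0.25, target_hours=8))    # -> 2.0 ns
# a hypothetical fast card doing 1.0 ns/hour, user asks for 24 h WUs:
print(ns_for_target(card_ns_per_hour=1.0, target_hours=24))    # -> 10.0 ns (capped)
```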
ID: 19926
... Well, one has now failed - task 3445790.

Exit status -40 (0xffffffffffffffd8)
SWAN: FATAL : swanBindToTexture1D failed -- texture not found

I hope that isn't a low-memory outcome on a 512MB card - if it is, the others are going to go the same way. And I hope the new shorter batch aren't all going to be *_HIVPR_n1_[un]bound_* - they are just as difficult on my cards.
ID: 19927
@skgiven
ID: 19928
I do my best, thank you.

I'm sorry, I didn't mean to offend you.

The *long* WUs are not meant to be something regular at all. We always stated that.

We, the crunchers on the other end of the project, see this problem from a very different point of view. When a WU fails, the cruncher is disappointed, and if many WUs keep on failing, the cruncher will leave this project for a more successful one. We don't see the progress of the subprojects you are working on, and cannot choose the appropriate subproject for our GPUs.

The solution for this issue is not just extending WU length. It has to be well thought out and the degrees of freedom to adjust pinpointed. Classification of WUs by card compute capacity is certainly an option.

I (and, I suppose, every cruncher in this forum) am just guessing what the best solution for the project could be, because I don't have the information needed to pick (or invent) the right one. But you (I mean GPUGRID) have that information (the precise number of crunchers, and their WU return times). You just have to process that information wisely.

You should create a little application for simulating GPUGRID (if you don't have one already). This application has to have some variables in it, for example WU length, subprojects, and processing reliability. If you play with those variables a little from time to time, you can choose the best thing to change in the whole project. You can even simulate a very different GPUGRID, to see whether the new one is worth the hassle - just like when SETI was moved to BOINC.
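A very rough sketch of the kind of simulation I mean (every parameter and the function itself are invented, purely to show the idea):

```python
# Very rough throughput simulation: how many WUs come back in time, how many
# miss the deadline, how many fail, as the WU length changes.
import random

def simulate(days=30, hosts=2000, wu_hours=12.0, failure_rate=0.05,
             hours_online_per_day=16.0, deadline_days=2.0, seed=1):
    random.seed(seed)
    completed = missed = failed = 0
    for _ in range(hosts):
        speed = random.uniform(0.5, 2.0)           # relative GPU speed
        crunch_hours_left = days * hours_online_per_day
        while crunch_hours_left > 0:
            runtime = wu_hours / speed             # GPU hours for one WU
            crunch_hours_left -= runtime
            if crunch_hours_left < 0:
                break
            wall_clock_days = runtime / hours_online_per_day
            if random.random() < failure_rate:
                failed += 1
            elif wall_clock_days > deadline_days:
                missed += 1
            else:
                completed += 1
    return {"completed": completed, "missed deadline": missed, "failed": failed}

print("normal WUs:", simulate(wu_hours=12))
print("long WUs:  ", simulate(wu_hours=24))
```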
ID: 19930
Ignasi,
ID: 19933
I just cruised by to say I'm lovin' the new wu's. ~21 hours on GTX260 but only ~55% utilization of the card. ??? Nice credits too :)
ID: 19935
The problem is that I have abused them in the rush to get key results back for publication.

The solution for this issue is not just extending WU length. It has to be well thought out and the degrees of freedom to adjust pinpointed. Classification of WUs by card compute capacity is certainly an option.

That's what I asked for in this thread. At the moment, and especially by sending such monsters to everyone participating - even normal crunchers, running their cards only 8 h per day, and not on the latest, most expensive cards - you are alienating those crunchers. I had to abort one of these monsters after 15 h, because it would never have made the 48 h deadline, and it's better to waste just 15 h than to waste 48 h.

As you seem to know quite well beforehand how demanding your WUs will be (the fixed credits give a good clue to that), you could make such an adjustment by only allowing certain types of WUs for each cruncher - for mine, usually no bigger than a 5000-credit claim, and only if nothing else is available up to an 8000-credit claim; see the sketch below. I think you could even do that on the scheduler. I'm no programmer, but it shouldn't be so hard to put the GPUs in our computers into classes and send them WUs according to their capabilities. Those capabilities are known to BOINC.
____________
Greetings from Saenger
For questions about Boinc look in the BOINC-Wiki
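A sketch of that size-class idea; the caps are just the figures from my example, and the helper itself is hypothetical:

```python
# Cap the credit size of WUs sent to a host, with a looser cap applied only
# when nothing smaller is available. All thresholds are invented for illustration.
def wu_allowed(wu_credit_claim: float, host_cap: float = 5000.0,
               fallback_cap: float = 8000.0,
               nothing_smaller_available: bool = False) -> bool:
    """Allow a WU only if its credit claim fits the host's size class."""
    limit = fallback_cap if nothing_smaller_available else host_cap
    return wu_credit_claim <= limit

print(wu_allowed(6500))                                    # False - too big for this host
print(wu_allowed(6500, nothing_smaller_available=True))    # True  - fallback cap applies
```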
ID: 19936
Computer ID 47762
ID: 19937
Computer ID 47762
ID: 19939
Ignasi,
ID: 19940
Presently, tasks are allocated according to what CUDA capability your card has, which in turn is determined by your driver:
ID: 19948
No problems with the long
ID: 19956
Presently, tasks are allocated according to what CUDA capability your card has, which in turn is determined by your driver:

My compute capability of 1.2 hasn't changed with the change of drivers, only the CUDA version number:

NVIDIA GPU 0: GeForce GT 240 (driver version unknown, CUDA version 3020, compute capability 1.2, 511MB, 257 GFLOPS peak)

If you don't use what you already know from a) our compute capabilities and b) the specific requirements of each and every WU to give the computers only those WUs they are capable of, it's your active decision to waste computing power by giving demanding WUs to low-performing computers. You know beforehand what will be wasted; you just don't care about it.
____________
Greetings from Saenger
For questions about Boinc look in the BOINC-Wiki
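A sketch of the kind of matching meant here, using only the data BOINC already reports for the card; the per-WU thresholds and the helper function are invented for illustration:

```python
# Match a WU's (hypothetical) requirements against what the client reports.
def can_handle(gpu: dict, wu: dict) -> bool:
    """True if the reported GPU meets the WU's requirements."""
    return (gpu["compute_capability"] >= wu["min_compute_capability"]
            and gpu["mem_mb"] >= wu["min_mem_mb"]
            and gpu["gflops"] >= wu["min_gflops"])

gt240 = {"compute_capability": 1.2, "mem_mb": 511, "gflops": 257}
# Requirement figures below are made up for the sake of the example.
long_wu = {"min_compute_capability": 1.1, "min_mem_mb": 768, "min_gflops": 400}
normal_wu = {"min_compute_capability": 1.1, "min_mem_mb": 256, "min_gflops": 150}

print(can_handle(gt240, long_wu))    # False -> don't send the monster here
print(can_handle(gt240, normal_wu))  # True
```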
ID: 19958
Have we run out of *long* wu's? I was getting one about 1 in 4, but now they have stopped - or has something else changed? I like them :)
ID: 19959
Hmmmm. task 3445649
ID: 19960
Have we run out of *long* wu's? I was getting one about 1 in 4, but now they have stopped - or has something else changed? I like them :)

You're welcome to my resend. Have fun with it ;-)
ID: 19961
Another of my hosts has just completed WU 2166545 - p19-IBUCH_10_variantP_long-0-2-RND8229.
ID: 19965
I would be more worried about the "No heartbeat from core client for 30 sec - exiting" message.
ID: 19966
I would be more worried about the "No heartbeat from core client for 30 sec - exiting" message.

Twice, early in the task lifetime? No, I'm not worried about that. I did do this month's Windows security updates in the middle of this run (under manual supervision - I don't allow fully automatic updates), and had a couple of lockups afterwards. But I used the machine for some layout work earlier this evening and it was running fine; the error happened while the machine was otherwise idle, and I was monitoring the tasks remotely via BoincView as normal.

I'm more worried about "SWAN: FATAL : swanBindToTexture1D failed -- texture not found", which - like you - I suspect to be an application or task definition error: probably the latter, since a similar task has just finished successfully on a card identical to the one where I first saw that error message.
ID: 19967
A second task is not sent out until 48 h after the initial task was sent (normally), so as yours returned in 48.5 h I guess the server just did not get round to issuing a resend before your task came back. Your result would have been fully used even if another task had been issued: yours would be the first one back, and in most cases a resend would not have started yet, so if you return slightly later than 48 h your task is still the most useful one; un-started resends can be recalled. However, after 3 or 4 days this is no longer the case; the resends would have completed by then. Sometimes the resends fail, but not too often, as they tend to go to reliable and faster cards - which begs the question of why this allocation method is not used to determine long-task hosts in the first place.
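A rough, simplified sketch of that timing (times in hours; the real scheduler logic is more involved and the helper here is only illustrative):

```python
# Simplified picture of when the original result is still the first one back.
def result_still_useful(hours_to_return: float, resend_issued_at: float = 48.0,
                        resend_runtime_hours: float = 24.0) -> bool:
    """The original result gets used as long as it arrives before a resend,
    issued around the 48 h mark, could plausibly have finished."""
    return hours_to_return < resend_issued_at + resend_runtime_hours

print(result_still_useful(48.5))   # True  - the un-started resend is recalled
print(result_still_useful(100.0))  # False - after 3-4 days a resend has completed
```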
ID: 19968
A second task is not sent out until 48 h after the initial task was sent (normally), so as yours returned in 48.5 h I guess the server just did not get round to issuing a resend before your task came back. [...]

We know all this. My comment was primarily aimed at Saenger, who seemed to be under the impression that a new task would be created, allocated, downloaded, and run unconditionally at 48 hours and 1 second - thus wasting electricity and CPU cycles on the second host. I thought a counter-example might help to set his mind at rest.
ID: 19969
I've received two reissued IBUCH_*_variantP_long_ WUs.
ID: 19972
A second task is not sent out until 48 h after the initial task was sent (normally), so as yours returned in 48.5 h I guess the server just did not get round to issuing a resend before your task came back. [...]

OK, so it's not 48 h but 49 or 50 - so what? The WU is in reality being ditched far before the communicated deadline of 4 days.
____________
Greetings from Saenger
For questions about Boinc look in the BOINC-Wiki
ID: 19975
http://www.gpugrid.net/workunit.php?wuid=2166431
ID: 19976
http://www.gpugrid.net/workunit.php?wuid=2166431

...and the other one I've mentioned (Workunit 2166526) is a fine example of the reason not to send these to hosts with low RAC.
ID: 19977
...and the other one I've mentioned (Workunit 2166526) is a fine example of the reason not to send these to hosts with low RAC.

Actually, the RAC is a very good, and readily available, basis for selecting hosts automatically for fast result returns. The only thing that has to be well balanced is that the "rush"-type workunits shouldn't drag a host's RAC below the selection level. I can't recall whether it has been suggested before; it's so obvious. Is this as complicated to implement as the other ideas?
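A sketch of that selection rule; the threshold and the safety margin are invented, only to show the balance meant above:

```python
# RAC-based host selection with headroom, so that the longer turnaround of
# "rush" WUs cannot drag a reliable host's RAC back below the selection level.
def eligible_for_rush(rac: float, threshold: float = 10000.0,
                      safety_margin: float = 0.2) -> bool:
    return rac >= threshold * (1.0 + safety_margin)

print(eligible_for_rush(15000))  # True
print(eligible_for_rush(11000))  # False - too close to the cut-off
```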
ID: 19978
The task duration correction factor, found in Computer Details, may be key in the allocation of resends, and it has potential use as a method of determining which systems to send long tasks to. But I don't know how easy any server-side changes are to make, as I have never worked on a BOINC server. I do work on servers, though, so I can understand a reluctance to move too far from the normal installations and setup. Linux tends to be about as useful as NT4 when it comes to system updates, program/service installations and drivers.
ID: 19979
I think TDCF is likely to be a most unhelpful measure of past performance - not least because I have a suspicion that the current duration estimates (as defined by <rsc_fpops_est>) don't adequately reflect the work complexity of different task types, and the efficiency of processing of the ones, like GIANNI, which take advantage of the extra processing routines in the newer applications.
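Roughly how I understand the client's estimate to behave - simplified, with invented numbers (the 257 GFLOPS figure is just the GT 240 peak from the detection line quoted earlier, used here only for illustration; details vary between client versions):

```python
# Simplified view of the runtime estimate:
# estimated runtime ~ rsc_fpops_est / host speed * duration correction factor.
rsc_fpops_est = 5.0e15    # hypothetical fpops estimate from the workunit
host_flops = 2.57e11      # ~257 GFLOPS peak (GT 240), illustration only
dcf = 3.5                 # duration correction factor learned by the client

est_seconds = rsc_fpops_est / host_flops * dcf
print(f"estimated runtime: {est_seconds / 3600:.1f} h")   # ~18.9 h
# A single per-project DCF cannot be right for task types whose real work per
# fpops differs a lot - it just oscillates between them.
```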
ID: 19980
a most unhelpful measure of past performance

The newest system I added has a TDCF of 4.01 (GTX260 + GT240). My quad GT240 system's TDCF is 3.54. The dual GTX470 system has a TDCF of 1.34. I think it's working reasonably well (thanks mainly to the restrictions the scientists normally impose upon themselves), but GPUGrid is somewhat vulnerable to changes in hardware, observed and estimated run-times, mixed CPU usages (swan_sync on/off, free CPU or not, external CPU project usages), and changes in the app (via driver changes).
ID: 19991
s0r78-TONI_HIVMSMWO1-0-6-RND2382_1: 9,930.70 seconds
ID: 19992
Hmmmm. task 3445649

Can you report that in "Number crunching", please? Thanks.
ID: 19999
It seems that long WUs aren't after all that much of a gain for everybody. We have dropped by 1000 WUs in progress in one single day. We may be trying to push the computations too far here. The last thing we want is to scare people off and have them run away from GPUGRID.

I would suspect another reason is the new GPU app for PrimeGrid. Fermi cards are the performance leaders. No babysitting required. Credit is high, probably too high IMO. WUs are short. Most importantly, the project admins listen to the users and are very proactive about solving problems and considering user suggestions.
ID: 20023
Well, since I have an older card, an 8800 GT, I would prefer short over long. At this very moment, my 8800 GT is crunching a WU that looks like it will take 36 hours of run time, and the WU is not flagged "long". There is no way I can shut down my PC with this one and still gain the "quick return" credit.
ID: 20027