1) Message boards : News : WU: OPM simulations (Message 43613)
Posted 2 days ago by Profile Retvari Zoltan
FWIW, the ever-increasing error rate is why I no longer crunch here. Hours of wasted time and electricity could be better put to use elsewhere, like POEM. My 970s are pretty much useless here nowadays and the 750 Tis are completely useless. JMHO
According to the performance page the GTX 970 is a pretty productive GPU:

It suggests that the cause of the increasing error rate you are experiencing lies at your end.
The most probable causes are too much overclocking, inadequate cooling, or an inadequate PSU. GPUGrid is more demanding than other GPU projects, so a system or settings that work for other projects may be inappropriate for GPUGrid tasks. In some cases, factory-overclocked cards will not work here until their clocks are reduced to the factory default (reference) frequency.
2) Message boards : News : WU: OPM995 simulations (Message 43602)
Posted 2 days ago by Profile Retvari Zoltan
Thanks Stefan!
As there are plenty of workunits queued (7,920 atm), and some of them are very long, I suggest everyone reduce their work cache to 0.03 days to maximize throughput & the credits earned.
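For reference, the work cache can be set either in BOINC Manager's computing preferences ("Store at least X days of work") or by placing a global_prefs_override.xml in the BOINC data directory. A minimal sketch, using the tag names from the BOINC client configuration documentation:

```xml
<!-- global_prefs_override.xml (BOINC data directory) - overrides web preferences locally.
     work_buf_min_days: the "store at least" buffer (the work cache);
     work_buf_additional_days: the "store up to an additional" buffer. -->
<global_preferences>
   <work_buf_min_days>0.03</work_buf_min_days>
   <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```

After saving the file, select "Read local prefs file" in BOINC Manager (or restart the client) for the change to take effect.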
3) Message boards : News : WU: OPM simulations (Message 43590)
Posted 3 days ago by Profile Retvari Zoltan
I haven't tried this, but theoretically it should work.

What theory is that? It isn't a defined field, according to the Application configuration documentation.

Oh, my bad!
That won't work...
I read a couple of posts about this somewhere, but I've clearly mixed it up.
Sorry!
Sk, could you hide that post please?
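For anyone curious, the fields that are actually defined for app_config.xml (per the BOINC client configuration documentation) are quite limited. A minimal sketch; the application name below is a placeholder, not necessarily the project's actual app name:

```xml
<!-- app_config.xml - goes in the project's directory under the BOINC data directory.
     Only documented fields are shown; <name> must match the project's
     short application name (the one used here is an assumption). -->
<app_config>
   <app>
      <name>acemdlong</name>
      <max_concurrent>1</max_concurrent>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
```

Anything outside these documented fields is silently ignored or rejected by the client, which is presumably why the earlier suggestion didn't work.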
4) Message boards : News : WU: OPM simulations (Message 43583)
Posted 3 days ago by Profile Retvari Zoltan


A simulation containing only 35632 atoms is a piece of cake.
5) Message boards : Graphics cards (GPUs) : New drivers NVIDIA GPU's (Message 43572)
Posted 4 days ago by Profile Retvari Zoltan
One thing I can hopefully advise on correctly is that if you try to update straight onto new hardware (say you replace the HDD with a SSD and try to use the W7/W8.1 key) it will likely fail and you may need to reactivate your W7/8.1 key again before trying again. Better to keep existing hardware, update and activate W10 before changing the hardware (changing a drive might require you to install W10 again, but once W10 has been activated you can do this as many times as you need).
Theoretically, the second release of Windows 10 (November 2015, aka v1511) accepts the product key of the corresponding previous Windows version, but only until the free upgrade period ends.
6) Message boards : News : WU: OPM simulations (Message 43567)
Posted 4 days ago by Profile Retvari Zoltan
It would be hugely appreciated if you could find a way of hooking up the projections of that script to the <rsc_fpops_est> field of the associated workunits. With the BOINC server version in use here, a single mis-estimated task (I have one which has been running for 29 hours already) can mess up the BOINC client's scheduling - for other projects, as well as this one - for the next couple of weeks.
+1
Could you please set the <rsc_fpops_est> and <rsc_disk_bound> fields correctly for the new tasks?
The <rsc_disk_bound> is set to 8*10^9 bytes (7.45GB), which is at least one order of magnitude higher than necessary.
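For context, these fields live in the workunit's input template on the server side. A minimal sketch with illustrative numbers; the values below are assumptions for the example, not the project's actual settings:

```xml
<!-- BOINC workunit input template (server side); values are illustrative only.
     rsc_fpops_est drives the client's runtime estimate and scheduling;
     rsc_fpops_bound is the hard limit before the task is aborted;
     rsc_disk_bound caps the task's disk usage, in bytes. -->
<workunit>
   <rsc_fpops_est>5e16</rsc_fpops_est>
   <rsc_fpops_bound>5e17</rsc_fpops_bound>
   <rsc_disk_bound>8e8</rsc_disk_bound>      <!-- ~0.75 GB rather than 7.45 GB -->
   <rsc_memory_bound>5e8</rsc_memory_bound>
</workunit>
```

Since the client's estimated runtime is derived from rsc_fpops_est and the host's benchmarked speed, a per-batch estimate here would directly fix the scheduling problem described above.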
7) Message boards : News : WU: OPM simulations (Message 43563)
Posted 4 days ago by Profile Retvari Zoltan
I know my opinions aren't liked very much here
That should never make you refrain from expressing your opinion.

but I wanted to express my response to these 5 proposed "minimum requirements".
It was a mistake to call these "minimum requirements"; they are intended for novices. Perhaps that is what makes them unavailing.

These "minimum requirements" are... not great suggestions, for someone like me at least. I realize I'm an edge case. But I'd imagine that lots of people would take issue with at least a couple of the 5.
If you keep an eye on your results, you can safely skip these "recommendations". We can, and we should, refine these recommendations to make them more appropriate and less offensive. I made their wording harsh on purpose, to provoke a debate. But I can show you results and hosts which validate my 5 points (just browse the links in my post about the workunit with the worst history I've ever received, and the other similar ones).
The recommended minimum GPU should be better than the current one (~GTX 750-GTX 660), as the release of the new GTX 10x0 series will result in longer workunits by the end of this year, and the project should not lure in new users with lesser cards only to frustrate them in 6 months.
8) Message boards : News : WU: OPM simulations (Message 43556)
Posted 4 days ago by Profile Retvari Zoltan
Ok, Gianni changed the baseline WUs available per GPU per day from 50 to 10
Thanks!
EDIT: I don't see any change yet on my hosts' max number of tasks per day...
9) Message boards : Graphics cards (GPUs) : New drivers NVIDIA GPU's (Message 43553)
Posted 4 days ago by Profile Retvari Zoltan
I've already reported this to Nvidia, but can any of you suggest what to do to make that computer run properly again?
You should try pressing F8 (repeatedly) at system startup, right after the BIOS hands the boot process to the OS.
Then select "Enable VGA mode" or something similar.
10) Message boards : News : WU: OPM simulations (Message 43551)
Posted 4 days ago by Profile Retvari Zoltan
The thing with excluding bad hosts is unfortunately not doable as the queuing system of BOINC apparently is pretty stupid and would exclude all of Gerard's WUs until all of mine finished if I send them with high priority :(
I recall that there was a "blacklist" of hosts in the GTX 480-GTX 580 era. Once, my host got blacklisted upon the release of the CUDA 4.2 app: this app was much faster than the previous CUDA 3.1 one, so the cards could be overclocked less, and my host began to throw errors until I reduced its clock frequency. It could not get tasks for 24 hours, IIRC. However, it seems that this "blacklist" feature disappeared when the BOINC server software at GPUGrid was later updated. It would be nice to have this feature again.

