1) Message boards : News : App update 17 April 2017 (Message 47920)
Posted 2395 days ago by Profile skgiven
Don't know what the situation is regarding a new app, but if an OpenMM app was compiled using VS 2010, in theory it might still work on XP. The latest VS version doesn't support XP, IIRC. Other factors (CUDA drivers, a mainstream Volta release) might still prevent it from working, and if the WDDM overhead isn't an issue with OpenMM then supporting XP might be unnecessary.
2) Message boards : Number crunching : SWAN_SYNC Enable/Disable (Message 47728)
Posted 2450 days ago by Profile skgiven
Advanced search SWAN_SYNC

FAQ - Best configurations for GPUGRID


Keyboard skills tip for the day:

    Ctrl + f


Enter the text "SWAN_SYNC" to find the posts you are looking for in the thread.
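For reference, SWAN_SYNC is just an environment variable read by the ACEMD app: it makes the app busy-wait (poll) on the GPU with a full CPU core instead of yielding, which can raise GPU utilization. A minimal sketch of enabling it on a Linux host — the value shown (1) and the service name are assumptions; older app versions used 0, so check the FAQ thread for your app version:

```
# Set SWAN_SYNC system-wide so the BOINC client (and the ACEMD app it
# launches) inherit it. Value 1 is an assumption - older apps used 0.
echo 'SWAN_SYNC=1' | sudo tee -a /etc/environment

# Restart the client so running/new tasks pick up the variable
# (service name assumes a Debian/Ubuntu-style install):
sudo systemctl restart boinc-client
```

Remember the trade-off: one CPU core is effectively consumed per GPU task while polling.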

3) Message boards : News : App update 17 April 2017 (Message 47718)
Posted 2453 days ago by Profile skgiven
Assuming an OpenMM-based Acemd3 app turns up, any and all user-side tweaks, modifications & optimizations will need to be reassessed, primarily to see if they still work: Coolbits, nice, SWAN_SYNC, Process Lasso, priority, HT ON/OFF, freeing up a CPU/thread...
Which GPU models will work best/are the best value will also need to be reassessed. Some may no longer work, some might be better...
The importance of PCIe & CPU speeds/strengths/weaknesses might change, as might the operating systems we can use...
4) Message boards : Number crunching : Question re rebooting during an active Work Unit in Progress (Message 47223)
Posted 2529 days ago by Profile skgiven
Select "No new tasks" for the project. Allow existing work to complete and report, then update and restart before accepting work from the project again.
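The same sequence can be driven from the command line with BOINC's boinccmd tool; a sketch, assuming a local client with the standard tools installed (use the project URL exactly as it appears in your client):

```
# Stop fetching new GPUGRID work, but let running tasks finish:
boinccmd --project http://www.gpugrid.net/ nomorework

# ...wait for the remaining tasks to complete and report, then reboot.
# Afterwards, allow new work and contact the scheduler again:
boinccmd --project http://www.gpugrid.net/ allowmorework
boinccmd --project http://www.gpugrid.net/ update
```

This avoids aborting an in-progress workunit across the reboot.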
5) Message boards : Graphics cards (GPUs) : Mixing GTX 10X0 GPUs with older GPUs in the same system (Message 47033)
Posted 2555 days ago by Profile skgiven
The new app allows mixing of new and older generations of GPUs.
For example, a GTX970 can now be used alongside a GTX1060 in a Windows 10 system.

The thread called DO NOT MIX GTX 10X0 GPU with older GPUs in the same system has been retired.

I'll move the last few posts here...
6) Message boards : Number crunching : Where did all my task history go? (Message 46597)
Posted 2599 days ago by Profile skgiven
It sure would be nice if they purged all the outdated error tasks from 2014-2016.


Let's not forget the CUDA 5.5 errors from 2013:

http://www.gpugrid.net/results.php?hostid=139265&offset=0&show_names=0&state=5&appid=

Task ID   WU ID     Sent (UTC)             Returned (UTC)         Outcome                 Run time (s)  CPU time (s)  Credit  Application
7418062   4883298   31 Oct 2013 13:17:36   31 Oct 2013 15:14:52   Error while computing   2.21          0.20          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7417225   4883425   31 Oct 2013 8:59:24    31 Oct 2013 9:43:26    Error while computing   2.21          0.20          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7414821   4883195   30 Oct 2013 23:57:35   31 Oct 2013 0:06:05    Error while computing   2.17          0.17          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7413026   4883662   31 Oct 2013 12:54:58   31 Oct 2013 12:57:13   Error while computing   2.32          0.20          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7413016   4883652   31 Oct 2013 9:43:26    31 Oct 2013 9:44:51    Error while computing   2.40          0.19          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7412905   4883541   31 Oct 2013 1:22:10    31 Oct 2013 3:18:05    Error while computing   2.21          0.17          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7412626   4883263   30 Oct 2013 21:15:39   30 Oct 2013 21:23:58   Error while computing   2.25          0.19          ---     Short runs (2-3 hours on fastest card) v8.14 (cuda55)
7247115   4751509   4 Sep 2013 10:12:01    4 Sep 2013 13:53:56    Error while computing   6,407.73      1,130.65      ---     ACEMD beta version v8.11 (cuda55)
7247113   4751587   4 Sep 2013 10:12:01    4 Sep 2013 12:07:05    Error while computing   6,408.04      1,214.25      ---     ACEMD beta version v8.11 (cuda55)
7202422   4715242   24 Aug 2013 12:51:21   24 Aug 2013 13:39:31   Error while computing   2,467.68      2,360.33      ---     ACEMD beta version v8.00 (cuda55)
7) Message boards : Number crunching : Power outage destroyed results (Message 46441)
Posted 2628 days ago by Profile skgiven
Can I move the BOINC directory to another drive? It's currently in my C drive and I don't want decreased performance.

Yes. I use smallish SSDs for most of my systems, but on my W10 system the BOINC projects (app/project data) folder is on a HDD. Basically, cut and paste the BOINC data directory to the second drive (perhaps backing it up along the way) and reinstall BOINC, pointing it to the new directory on the secondary drive. The executable will still be on the primary drive, but all the reading and writing will be done on the secondary drive. Obviously, disable caching on the secondary drive!
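On Windows that sequence looks roughly like the following — a sketch only, assuming a default service install under C:\ProgramData\BOINC and a target of D:\BOINC (both paths are assumptions; adjust to your setup):

```
:: Stop the BOINC client service before touching the data directory:
net stop "BOINC"

:: Copy the data directory (projects, slots, client_state.xml) intact:
robocopy "C:\ProgramData\BOINC" "D:\BOINC" /E /COPYALL

:: Re-run the BOINC installer and set its "Data directory" option to
:: D:\BOINC, then delete the old copy once everything is confirmed working.
```

Copy first, delete later — if anything goes wrong you still have the original client_state.xml and in-progress task data.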
8) Message boards : Number crunching : 1 Work Unit Per GPU (Message 46381)
Posted 2635 days ago by Profile skgiven
At present there are not enough tasks to go around, & most/all WUs have high GPU utilization, so the project doesn't benefit (overall) from even high-end GPUs running 2 tasks at a time. Considering the recent upload/server availability issues, it makes even less sense to run more than 1 task/GPU. I would even go so far as to suggest it's time we were running 1 task across multiple GPUs (for those who have them) - something that would also benefit those with 2 or more smaller GPUs, assuming an NVLink/SLI isn't required. By the way, GPU utilization isn't the be-all and end-all of GPU usage; apps that perform more complex simulations, require more GDDR, and use more PCIe bandwidth are just using the GPU differently, not more or less (one probably comes at the expense of the other).

When available tasks are exclusively low GPU-utilization tasks, only then could the project benefit from high-end GPUs running 2 tasks simultaneously.
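For anyone who still wants to experiment, tasks-per-GPU is controlled by BOINC's standard app_config.xml mechanism rather than anything project-specific. A minimal sketch — the app name below is an assumption, so take the real names from client_state.xml:

```xml
<!-- app_config.xml, placed in the GPUGRID project directory
     (e.g. projects/www.gpugrid.net/). gpu_usage 0.5 lets two
     tasks share one GPU. "acemdlong" is an assumed app name -
     substitute the names your client actually reports. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving it, use "Options -> Read config files" in the BOINC Manager (or restart the client) for it to take effect.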

PS. The Pascal app (cuda80) is different from the previous app (cuda65), which can run mixed GPU setups; the latest app is exclusively for Pascals (GTX 1000 series).
9) Message boards : Number crunching : Gridcoin (Message 46380)
Posted 2635 days ago by Profile skgiven
It's not a useful e-currency ATM, & until you convert it into one (Bitcoin) it's of theoretical value only. Also, its value against Bitcoin is steadily falling, & there are very few conversions; not many people want to buy Gridcoins.
10) Message boards : News : WU: BNBS (Message 46379)
Posted 2635 days ago by Profile skgiven
Well, even my PCIe2 x16 connected GTX1080 runs at a pretty 95% per GPU-Z... which is a very pleasing figure. The BNBS accommodate the fast Pascals, I presume, and reduce the WDDM influence. Anyway, I am happy with the throughput.


Exactly. These workunits involve unusually little interaction between the CPU and the GPU.

For comparison, I'm seeing 40% PCIe usage on my GTX1060-3GB running a PABLO_adaptive task (GDDR @ 8GHz & GPU at ~1950MHz on Linux; ~95% GPU utilization; PCIe2 x16, while only running 1 CPU task on a tri-core AMD).

Well, it took him 11 days to complete a BNBS2; now it should take slightly over 24 hours, so over 10 times as fast. I like to exaggerate as well :)

If it takes over 5 days there really isn't much point in 'contributing'.

I'm sure it must be especially frustrating to you when completed tasks are queued up on volunteers' computers because the server isn't in a position to accept the uploads and/or reports, and then again because the BOINC clients have gone into an extended backoff because of the connection failures.

I've just manually retried and successfully reported two tasks which had completed overnight, and got replacement tasks on both machines: at least they started running almost immediately, so not too much time was lost.

The continuation of this problem calls the credit/bonus system into question, unless of course you don't value it much/at all. When crunchers complete work quickly but can't upload & report, they miss the bonus because the server isn't available, & the BOINC Manager backs off the retries. That's not the cruncher's fault, it doesn't help the project get through work, and it doesn't help retain crunchers! GPUGrid stats regarding the fastest GPUs/systems/contributors become fickle, further frustrating some crunchers.
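To make the stakes concrete, here is a sketch of the return-time bonus as I understand it — the tiers (+50% credit if reported within 24 h of being sent, +25% within 48 h, nothing after that) are assumptions from the project's long-standing policy, so check the FAQ for current values:

```shell
# Hypothetical sketch of GPUGRID's return-time credit bonus tiers.
# Tier values (50/25/0) are assumptions - verify against the FAQ.
bonus_percent() {
    hours=$1                      # whole hours from "sent" to "reported"
    if [ "$hours" -le 24 ]; then
        echo 50                   # fast return: full bonus
    elif [ "$hours" -le 48 ]; then
        echo 25                   # slower return: reduced bonus
    else
        echo 0                    # too late: base credit only
    fi
}

bonus_percent 20    # prints 50
bonus_percent 47    # prints 25
```

So a task finished in 20 hours but stuck uploading past the 24-hour mark drops a tier through no fault of the cruncher.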

But don't worry, there's only 2.5min to go!

