1) Message boards : Number crunching : Driver 359.00 -- GPU Missing (Message 42209)
Posted 8 days ago by Profile skgiven
The recommendation is to finish tasks before installing drivers and to restart the system afterwards.
Suggest you restart your computer again; Boinc likely started before the drivers finished updating when you rebooted for the first time.
Did you install Boinc as a service?
2) Message boards : GPUGRID CAFE : Creating lists does not work (for me...) (Message 42202)
Posted 9 days ago by Profile skgiven
Pre-format the list and wrap it in [pre] and [ /pre] tags, without the space in the closing tag (the space is only there so the tag shows up literally in this post); anything between the tags keeps its spacing and line breaks.
Not all BBCode is supported in these threads.
3) Message boards : GPUGRID CAFE : The milestone thread (Message 42200)
Posted 10 days ago by Profile skgiven
Congratulations Stroppy.
4) Message boards : GPUGRID CAFE : A Sad Goodbye (Message 42199)
Posted 10 days ago by Profile skgiven
Good luck Tomba!

Gerard, please rethink and change the strategy. Suggestions have been made - you could simply have a priority queue and a secondary queue to ensure a constant flow of work, assuming drive space isn't as much of a problem these days.

The 'optimise towards turnover efficiency' strategy has the detrimental effect of putting crunchers off, thus reducing participant numbers. In the short run this isn't noticeable unless someone speaks up, but it's a long-term factor. If people don't get work when they want it, they go elsewhere or simply stop crunching.
5) Message boards : Number crunching : Long runs yielding less credits ? (Message 42198)
Posted 10 days ago by Profile skgiven
What I do find strange is that all WU types seem to require the same number of numeric operations, 5,000,000 GFLOPs.

That's because the project doesn't bother adjusting the GFLOPs estimate for every new WU type. Boinc gets there in the end, mostly.
6) Message boards : Number crunching : JIT (Just In Time) or Queue and Wait? (Message 42176)
Posted 14 days ago by Profile skgiven
It's a misconception to think that you can't have a small cache of work without negatively impacting the project. Crunchers who regularly return work all make a valuable contribution. This should never be misconstrued as slowing down the research - without the crunchers GPUGrid would not exist.

The turnaround problem stems from the research structure. Slow turnaround is mostly down to crunchers who have smaller cards and/or don't crunch regularly. Many of these crunchers don't understand their Boinc settings, or the consequences of returning work slowly or of holding a cache.
Other issues, such as crunching for other projects (task switching and priorities), bad work units, and computer or Internet problems, are significant factors too.

A solution might be to give optimal crunchers the most important work (if possible), delegate less critical work to those who return work more slowly or less reliably, and only send short tasks to slow crunchers. To some extent I believe this is already being done.

If return time is critical to the project then credits should be based on return time: instead of having 2 or 3 cut-off points, there should be a continual gradient starting at 200% for the fastest valid returns (say 8h) and dropping by 1% every half hour, down to 1% for a WU returned after ~4.5 days (see the sketch below).
That would make the performance tables more relevant, add a bit of healthy competition, and prevent people being harshly chastised for missing a bonus time by a few minutes (GTX 750Ti on WDDM).
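For illustration only, a minimal Python sketch of such a gradient; the 8h start point and 1%-per-half-hour slope are just my example figures, not project policy:
[pre]
# Hypothetical sliding bonus: 200% at or under 8 hours, dropping
# 1 percentage point per half hour, with a floor of 1% (reached
# at roughly 4.5 days).
def credit_multiplier(return_hours):
    if return_hours <= 8.0:
        return 2.00                    # 200% for the fastest returns
    half_hours_late = (return_hours - 8.0) / 0.5
    pct = 200.0 - half_hours_late      # -1 point per 30 minutes
    return max(pct, 1.0) / 100.0       # floor of 1%

for h in (6, 8, 24, 48, 107.5):
    print(f"{h:6.1f} h -> {credit_multiplier(h):.0%}")
[/pre]
A smooth curve like that rewards every half hour saved rather than punishing a miss by a few minutes.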
7) Message boards : Graphics cards (GPUs) : GPU Grid and Titan Z (Message 42174)
Posted 14 days ago by Profile skgiven
The application does not allow you to run one task across two GPU cores simultaneously. The app could be adapted to do so, but for GPUGrid research via Boinc it's not suitable. Even if GPUGrid wanted to do this (and overall it would probably be slower), it would only work on some cards, and not on the cards for which it might be most useful: the GTX 750 and GTX 750Ti, for example, cannot support SLI, yet their users (credits) and the project (turnaround) would be the most likely to benefit. Things might change with Pascal.
8) Message boards : Frequently Asked Questions (FAQ) : FAQ - Recommended GPUs for GPUGrid crunching (Message 42173)
Posted 15 days ago by Profile skgiven
Generation Maxwell (GM) is now the recommended architecture for crunching at GPUGRID, due to performance/Watt and the research being predominantly Single Precision.

Recommendations are grouped into High End, Mid-Range and Entry Level and by architecture or series:

Highly Recommended (all GM):

High End: GTX Titan X, GTX 980Ti, GTX 980, GTX 970
Mid-Range: GTX 960, GTX 950
Entry Level: GTX 750Ti, GTX 750

Recommended (GeForce 700 series):
High End: GTX Titan Z, GTX Titan Black, GTX 780Ti, Titan, GTX 780, GTX 770, GTX 760Ti
Mid-Range: GTX 760, GTX 760 192-bit
Entry Level: GT 740, GT 730 (not the GF model)

Recommended (GeForce 600 series):

High End: GTX 690, GTX 680, GTX 670, GTX 660Ti
Mid-Range: GTX 660, GTX 650Ti Boost, GTX 650Ti
Entry Level: GTX 650 (2GB only), GT 640 (Kepler) and GT 630 Rev. 2

Still work but poor performance/Watt:

GeForce: GTX 590, 580, 570, 480, 560 Ti (448), 470, 465, 560 Ti, 460, 550 Ti
Tesla*: C2050/C2070 (Workstation), M2050/M2070, S2050 (Data Center)
Quadro Desktop*: Quadro 6000, 5000, 4000, 2000, 2000D

No longer work:
GeForce: GTX 295, 285, 280, 275, 260-216 (55nm), 460 SE, GTS 450, 480M (CC2.0)
Tesla: Tesla10, Tesla20
Quadro Mobile: 5010M, 5000, 4000, 3000, 2000
Quadro 600

Recommended Desktop OEM Cards:
GTX 960 (OEM), GTX 760 Ti, GTX 760
GeForce GTX 660 (OEM) - two versions: 1152:96:32 (256-bit bus) and the lesser 1152:96:24 (192-bit bus)
GTX 645 (GK106) 576:48:16
GT 640 (GK107)

OEM GTX cards tend to be mid-range and GT cards entry level; however, the 760 Ti and 760 are high end.

GPUGrid now requires CC2.0 or higher (CC3.0, CC2.1 or CC2.0).
Best to use GPUs with 2GB or more of GDDR.
9) Message boards : Graphics cards (GPUs) : GPU performance chart (Message 42170)
Posted 16 days ago by Profile skgiven
I know there are few of them, but as well as Linux systems, Win XP and Server 2003 machines do not incur the WDDM overhead.
There are many factors that need to be considered:
Found differences in the WDDM overhead on Win 2008+ servers: a 5.5 to ~8% loss rather than the usual ~11% loss, albeit a while ago. There are probably not too many attached, but it could skew the results slightly.
Also noticed, as have others, that the bigger (faster) the card, the greater the WDDM overhead. This is probably a consequence of multiple other factors.
Observed a noticeable performance variation between DDR3 and DDR2 systems, and would expect some difference between DDR4 and DDR3 systems (especially with older RAM).
CPU architecture (what’s on the die [bus]) and frequency differences (2GHz vs 3.9GHz) are a factor too.
Different manufacturers/release versions/bins even of the one GPU cause some variation (usually via clock rates) - no two GPUs are identical.
Settings via software such as Afterburner can also impact performance, sometimes via temps (clock throttling) or the GPU bus.
PCIE width and generation are a factor, and again it's more noticeable with bigger cards and exacerbated by other factors, which are multiple (e.g. 0.94 × 0.98 × 0.96 = 0.88, a 12% loss; see the sketch after this list).
Some people run 2 (or more) WUs at a time on the one card. Do the stats account for this?
2 WUs are definitely run simultaneously on GTX Titan X cards; it would not make sense to use them otherwise!
Other factors such as drive speed, what else is running on the system (HD video, defrags, drive scans or searches), CPU availability, SWAN_SYNC and Boinc settings can impact on performance. Occasionally drivers/app versions (CUDA dependent) bring improvements [speed or stability].
Some of these were exacerbated by GPU architecture bottlenecks (bus/bandwidth), which are generation dependent.
Are mobile versions counted separately or not? Same architectures (mostly) but different clock settings and weaker hardware backing them up.
Downclocking (OS/driver or user), lots of suspend and resume due to user activity.
Some of the above variations themselves vary by WU type (GPU RAM, bus or CPU dependency especially), and everything is subject to app changes.
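As a quick illustration of how the small losses above compound multiplicatively rather than simply adding up (the individual percentages are only examples):
[pre]
from math import prod

# Illustrative, independent efficiency factors (1.0 = no loss):
# ~6% WDDM overhead, ~2% PCIE limitation, ~4% RAM/CPU penalty.
factors = [0.94, 0.98, 0.96]

net = prod(factors)
print(f"net performance {net:.2f} -> {1 - net:.0%} total loss")
# net performance 0.88 -> 12% total loss
[/pre]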

Boinc may still misrepresent all cards as being of the GPU0 type (though I think it's more complex than that); however, the app knows better!

I see that the GTX 750Ti is listed twice based on GDDR amount: 2048 and 2047MB. Ditto for the 960, 780Ti… (driver/OS?).
10) Message boards : Number crunching : JIT (Just In Time) or Queue and Wait? (Message 42030)
Posted 35 days ago by Profile skgiven
It does make sense, but there are issues with that approach which make it impractical.

Tasks take a long time to upload: ~6 to 10 min for me (Europe), and usually longer from the US to Europe and for people on slower broadband connections.
I have 2 GPUs in each of two systems. Returning 2 WUs per GPU per day, I would lose 50 to 80 min of GPU crunching per day; for some people it could be several hours a day. Too much.
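A back-of-the-envelope sketch of where those figures come from, using only the numbers above (4 GPUs, 2 WUs per GPU per day, 6-10 min per blocking upload):
[pre]
# Rough daily GPU idle time if every upload blocks crunching
# (the pure JIT / no-cache case). Purely illustrative.
def daily_idle_minutes(gpus, wus_per_gpu_per_day, upload_minutes):
    return gpus * wus_per_gpu_per_day * upload_minutes

for upload in (6, 10):   # minutes per upload, from Europe
    idle = daily_idle_minutes(4, 2, upload)
    print(f"{upload} min uploads -> {idle} min of idle GPU time/day")
# 6 min uploads -> 48 min; 10 min uploads -> 80 min
[/pre]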

If a new task would not download until an existing task was at 90%, that might be the happy medium.

Some people also run 2 WUs at a time on the same GPU. This increases overall throughput, but requires more tasks to be available.
