1) Message boards : Number crunching : About the GERARD_A2AR batch (Message 42758)
Posted 21 hours ago by Profile skgiven
The science determines how much we rely on the CPU. That said, this is a GPU project: it tries to do most of the work on the GPU and is designed to utilize gaming GPUs.

The problem identified in the OP wasn't with the CPU; it was (mostly) a PCIe x4 bottleneck. That said, the PCIe controller is on the CPU, and the CPU does have to do some work. So there isn't an inherent need for another CPU (socket 2 on a dual-CPU board), and if there were, the PCIe controllers would need to be split across both CPUs.

Perhaps the only way for WUs to avoid the WDDM overhead in recent Microsoft operating systems would be to use on-GPU CPUs. If that ever becomes a reality, the co-processor would get a dedicated 'main' processor (a developmental flip). For now there is still XP, Server 2003/2003 R2 and Linux.
2) Message boards : Graphics cards (GPUs) : 600W PSU for 2 GPU System (Message 42756)
Posted 23 hours ago by Profile skgiven
145W TDP for the GTX 970 + 60W TDP for the GTX 750 Ti is 205W.
Your CPU has a stock TDP of 125W, and it's reasonable to expect it to use close to that if you crunch with it.
Other system components typically use ~50W, so your system is likely drawing around 380W at stock/reference values.
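As a quick sanity check, the power budget above can be summed directly (the component figures are the stock TDPs quoted in this post; "other components" is a rough estimate):

```python
# Stock TDP figures quoted above; "other components" is an estimate.
tdp_watts = {
    "GTX 970": 145,
    "GTX 750 Ti": 60,
    "CPU (stock)": 125,
    "other components (est.)": 50,
}

total_load = sum(tdp_watts.values())
print(total_load)  # 380
```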

That Corsair (CX600) is good for up to 480W on its main 12V rail. Your disks will probably come off the 3.3V and 5V rails.

The PSU has an MTBF (Mean Time Between Failures) of 100K hours (~11 years) and comes with a 2-year RTM (return to manufacturer) warranty.

It's a Bronze-level 'Builder Series' model, and in my opinion it's reasonable to expect it to do the job. However, I tend to use higher-end PSUs, as they are more reliable and more power efficient (and more cost efficient for me in the long run).

If you are thinking of upgrading (say, because you've had the PSU for 2 years), the RM series are great value for money. Even at 115V you would get close to 90% power efficiency drawing 380W on an RM750i, and the RM series warranty is longer, at 5 or 7 years.

Most PSUs are efficiency-optimized for ~50% load.
There are other good PSU manufacturers (but not many)!
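To put the 50% sweet spot and the 90% figure from above into numbers, here is a minimal sketch (wall draw is simply DC load divided by efficiency; the 90% efficiency value is the RM750i estimate from this post, not a measured figure):

```python
def wall_draw(dc_load_w, efficiency):
    """Power drawn from the mains for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

# A ~380W DC load on a 750W unit sits right around the ~50% sweet spot.
load_factor = 380 / 750
print(load_factor)                   # ~0.51

# At 90% efficiency, ~380W of DC load pulls ~422W from the wall.
print(round(wall_draw(380, 0.90)))   # 422
```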
3) Message boards : Number crunching : Driver 359.00 -- GPU Missing (Message 42209)
Posted 76 days ago by Profile skgiven
The recommendation is to finish tasks before installing drivers and to restart the system afterwards.
Suggest you restart your computer again; Boinc likely started before the drivers finished updating when you rebooted for the first time.
Did you install Boinc as a service?
4) Message boards : GPUGRID CAFE : Creating lists does not work (for me...) (Message 42202)
Posted 76 days ago by Profile skgiven
Pre-format the list and wrap it in [pre] and [ /pre] tags, typed without the space (the space is only there so the tag isn't parsed here).
Not all BB code is supported in the threads.
5) Message boards : GPUGRID CAFE : The milestone thread (Message 42200)
Posted 77 days ago by Profile skgiven
Congratulations Stroppy.
6) Message boards : GPUGRID CAFE : A Sad Goodbye (Message 42199)
Posted 77 days ago by Profile skgiven
Good luck Tomba!

Gerard, please rethink and change the strategy. Suggestions have been made: you could simply have a priority queue and a secondary queue to ensure a constant flow of work, assuming drive space isn't as much of a problem these days.

The 'optimise towards turnover efficiency' strategy has the detrimental effect of putting crunchers off, reducing participant numbers. In the short run this isn't noticeable unless someone speaks up, but it's a long-term factor. If people don't get work when they want it, they go elsewhere or just stop crunching.
7) Message boards : Number crunching : Long runs yielding less credits ? (Message 42198)
Posted 77 days ago by Profile skgiven
What I do find strange is that all WU types seem to require the same number of numeric operations, 5,000,000 GFLOPs.

That's because the project doesn't bother adjusting the GFLOPs estimate for every new WU type. Boinc gets there in the end, mostly.
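For context, Boinc's initial runtime estimate is (roughly) the task's operation estimate divided by the host's estimated speed, so a flat 5,000,000 GFLOPs figure maps to a different predicted runtime on every device. A simplified sketch, with a hypothetical device rate (the formula here is deliberately reduced; real Boinc applies further correction factors over time):

```python
# Simplified version of Boinc's initial runtime estimate:
#   seconds ~= estimated operations / estimated device FLOPS.
rsc_fpops_est = 5_000_000 * 1e9   # the flat 5,000,000 GFLOPs quoted above
device_flops = 2.5e12             # hypothetical ~2.5 TFLOPS effective rate

estimated_seconds = rsc_fpops_est / device_flops
print(estimated_seconds)  # 2000.0
```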
8) Message boards : Number crunching : JIT (Just In Time) or Queue and Wait? (Message 42176)
Posted 81 days ago by Profile skgiven
It's a misconception that you can't have a small cache of work without negatively impacting the project. Crunchers who regularly return work all make a valuable contribution. This should never be misconstrued as slowing down the research; without the crunchers, GPUGrid would not exist.

The turnaround problem stems from the research structure. Slow turnaround is mostly down to crunchers who have smaller cards and/or don't crunch regularly. Many of these crunchers don't understand their Boinc settings or the consequences of returning work slowly or holding a cache.
Other issues, such as crunching for other projects (task switching and priorities), bad work units, and computer or Internet problems, are significant factors too.

A solution might be to give optimal crunchers the most important work (if possible), delegate perceived lesser work to those who return work more slowly or less reliably, and only send short tasks to slow crunchers. To some extent I believe this is already being done.

If return time is critical to the project, then credits should be based on return time. Instead of having 2 or 3 cut-off points, there should be a continuous gradient starting at 200% for the fastest valid return (say 8h, for example), reduced by 1% every half hour, down to 1% for a WU returned after ~4.5 days.
That would make the performance tables more relevant, add a bit of healthy competition, and prevent people from being harshly chastised for missing a bonus time by a few minutes (GTX 750 Ti on WDDM).
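A minimal sketch of that gradient (the 8h fastest-return figure and the 1%-per-half-hour step are the example values suggested in this post, not an actual GPUGRID policy):

```python
def credit_multiplier(return_hours, fastest_hours=8.0):
    """Continuous credit bonus: 200% at or under the fastest valid return,
    dropping 1% per half hour late, floored at 1% (reached after ~4.5 days)."""
    if return_hours <= fastest_hours:
        return 2.00
    half_hours_late = (return_hours - fastest_hours) / 0.5
    return max(0.01, 2.00 - 0.01 * half_hours_late)

print(credit_multiplier(8))    # 2.0   (full 200% bonus)
print(credit_multiplier(24))   # ~1.68 (16h late = 32 half-hour steps)
print(credit_multiplier(110))  # 0.01  (floor, ~4.5 days)
```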
9) Message boards : Graphics cards (GPUs) : GPU Grid and Titan Z (Message 42174)
Posted 82 days ago by Profile skgiven
The application does not allow you to run one task across two GPUs simultaneously. The app could be adapted to do so, but for GPUGrid research via Boinc it's not suitable. Even if GPUGrid wanted to do this (and overall it would probably be slower), it would only work on some cards, and not on the cards for which it might be most useful: the GTX 750 and GTX 750 Ti, for example, cannot support SLI, yet there the user (credits) and the project (turnaround) would be most likely to benefit. Things might change with Pascal.
10) Message boards : Frequently Asked Questions (FAQ) : FAQ - Recommended GPUs for GPUGrid crunching (Message 42173)
Posted 82 days ago by Profile skgiven
Generation Maxwell (GM) is now the recommended architecture for crunching at GPUGRID, due to its performance/Watt and the research being predominantly single precision.

Recommendations are grouped into High End, Mid-Range and Entry Level and by architecture or series:

Highly Recommended (all GM):

High End: GTX Titan X, GTX 980Ti, GTX 980, GTX 970
Mid-Range: GTX 960, GTX 950
Entry Level: GTX 750Ti, GTX 750

Recommended (GeForce 700 series):
High End: GTX Titan Z, GTX Titan Black, GTX 780Ti, GTX Titan, GTX 780, GTX 770, GTX 760Ti
Mid-Range: GTX 760, GTX 760 192-bit
Entry Level: GT 740, GT 730 (not the GF model)

Recommended (GeForce 600 series):

High End: GTX 690, GTX 680, GTX 670, GTX 660Ti
Mid-Range: GTX 660, GTX 650Ti Boost, GTX 650Ti
Entry Level: GTX 650 (2GB only), GT 640 (Kepler) and GT 630 Rev. 2

Still work but poor performance/Watt:

GeForce: GTX 590, 580, 570, 480, 560 Ti (448), 470, 465, 560 Ti, 460, 550 Ti
Tesla*: C2050/C2070 (Workstation), M2050/M2070, S2050 (Data Center)
Quadro Desktop*: Quadro 6000, 5000, 4000, 2000, 2000D

No longer work:
GeForce: GTX 295, 285, 280, 275, 260-216 (55nm), 460 SE, GTS 450, 480M (CC2.0)
Tesla: Tesla10, Tesla20
Quadro Mobile: 5010M, 5000, 4000, 3000, 2000
Quadro: 600

Recommended Desktop OEM Cards:
GTX 960 (OEM), GTX 760 Ti, GTX 760
GeForce GTX 660 (OEM) - two versions: 1152:96:32 (256-bit bus) and the lesser 1152:96:24 (192-bit bus)
GTX 645 (GK106) 576:48:16
GT 640 (GK107)

OEM GTX cards tend to be mid-range and GT cards entry level; however, the 760 Ti and 760 are high end.

GPUGrid now requires CC2.0 or higher (CC3.0, CC2.1 or CC2.0).
Best to use GPUs with 2GB or more GDDR.
