1) Message boards : Server and website : User of the day (Message 40657)
Posted 18 hours ago by Profile skgiven


Hoover mover :)

...Walk Of Life
2) Message boards : Server and website : User of the day (Message 40655)
Posted 19 hours ago by Profile skgiven
You didn't miss it; it is missing!
GPUGrid hasn't done user of the day for years IIRC.

So it appears that you have been randomly chosen (by a 'back-end' process) for Nothing - Congratulations! ;p

User of the day normally appears on the front page of Boinc projects and it's part of the Boinc web template.

This,
http://uotd.org/
3) Message boards : Graphics cards (GPUs) : Select GPU for F@H (Message 40654)
Posted 19 hours ago by Profile skgiven
GPUgrid is a Boinc project.
Folding@home is not a Boinc project :|
4) Message boards : Graphics cards (GPUs) : Big Maxwell GM2*0 (Message 40642)
Posted 1 day ago by Profile skgiven
Cooling 12GB appears to be a big issue; it likely limits boost and pushes up power consumption. But without the 12GB it wouldn't be a Titan, seeing as it doesn't have good DP.
Hopefully a GTX980Ti with 6GB will appear soon - perhaps 2688 CUDA cores, akin to the original Titan, but 35 to 40% faster for here. At <£800, if not <$700, it would be an attractive alternative to the Titan X.

For the Titan X, excellent system cooling would be essential for long-term crunching, given the high memory temps. I would go for direct air cooling on the back of one or possibly two cards, but if I had 3+ cards I would want a different setup. The DEVBOXes just use air cooling, with two large front case fans, but I suspect the memory still runs a bit hot. Just using liquid wouldn't be enough unless it included back cooling. I guess a refrigerated thin-oil system would be ideal, but that would be DIY and Cha-Ching!

In my experience high GDDR temps affect the performance of lesser cards too, and I found stability by cooling the back of some cards. While tools such as MSI Afterburner let you cool the GPU via the fans, they don't even report the memory temps. It's often the case that the top GPU (closest to the CPU) in an air-cooled case runs hot. While this is partially from radiated heat, it's mostly because the airflow over the back of the card comes past the CPU, so it's already warm/hot. A basic CPU water cooler is sufficient to remove this issue, and at only about twice the cost of a good CPU heatsink and fan it's a lot cheaper than a GPU water cooler.
5) Message boards : Graphics cards (GPUs) : Select GPU for F@H (Message 40640)
Posted 1 day ago by Profile skgiven
Your title is "Select GPU for F@H" and now you are saying you want to use your GT630 for GPUgrid, "GT630 = GPUgrid".

Assuming you can actually use both (which might depend on the motherboard) you can exclude the A10-750K iGPU from being used by Boinc (or GPUGrid) by creating a cc_config.xml file.
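A minimal sketch of such a cc_config.xml, placed in the Boinc data directory (the device number is an assumption - check Boinc's event log to see how your GPUs are enumerated):

<cc_config>
  <options>
    <!-- Ignore the AMD iGPU for all of Boinc (assumes it shows up as ATI device 0) -->
    <ignore_ati_dev>0</ignore_ati_dev>
  </options>
</cc_config>

Restart the Boinc client after creating or changing the file, as GPU detection happens at startup.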

It's possible that the discrete GPU will take precedence for games (motherboard/BIOS) and you won't be able to do anything about that.

You might need the monitor to be plugged in to the iGPU to have any chance.
The use of iGPUs has been discussed before; try searching for it.
6) Message boards : Graphics cards (GPUs) : Big Maxwell GM2*0 (Message 40635)
Posted 1 day ago by Profile skgiven
Roughly 56% faster than a Titan,

https://www.gpugrid.net/forum_thread.php?id=1150

What are the boost clocks?

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-titan-x-gm200-maxwell,4091-6.html
It suggests it boosts to 1190MHz but quickly drops to 1163MHz during stress tests, and that it keeps stepping down or up in 13MHz increments, as expected.
1190 is ~15% shy of where I can get my GTX970 to boost, so I expect it will boost higher here (maybe ~1242 or 1255MHz).
Apparently the back gets very hot. I've dealt with this in the past by blowing air directly onto the back of a GPU and by using a CPU water cooler (as that reduces radiated heat).
7) Message boards : Graphics cards (GPUs) : NVidia GPU Card comparisons in GFLOPS peak (Message 40634)
Posted 1 day ago by Profile skgiven
Updated with the GTX Titan X (estimated values based on limited results):

    Performance  GPU                      Power  GPUGrid Performance/Watt
    211%         GTX Titan Z (both GPUs)  375W   141%
    156%         GTX Titan X               250W   156%
    116%         GTX 690 (both GPUs)       300W    97%
    114%         GTX Titan Black           250W   114%
    112%         GTX 780Ti                 250W   112%
    109%         GTX 980                   165W   165%
    100%         GTX Titan                 250W   100%
     93%         GTX 970                   145W   160%
     90%         GTX 780                   250W    90%
     77%         GTX 770                   230W    84%
     74%         GTX 680                   195W    95%
     64%         GTX 960                   120W   134%
     59%         GTX 670                   170W    87%
     55%         GTX 660Ti                 150W    92%
     53%         GTX 760                   130W   102%
     51%         GTX 660                   140W    91%
     47%         GTX 750Ti                  60W   196%
     43%         GTX 650TiBoost            134W    80%
     37%         GTX 750                    55W   168%
     33%         GTX 650Ti                 110W    75%

Throughput performance and performance/Watt are relative to a GTX Titan.
Note that these are estimates and that I’ve presumed Power to be the TDP as most cards boost to around that, for at least some tasks here.
I don't have a full range of cards to test against every app version or OS, so some of this rests on presumptions drawn from consistent observations of other cards across the range. I've never had a GTX750Ti, GTX750, 690, 780, 780Ti or any of the Titan range to compare, but I have read what others report. While I could have simply listed the GFLOPS/Watt for each card, that would only be theoretical and would ignore discussed bottlenecks (for here) such as the memory controller load, which differs by series.

The GTX900 series cards can be tuned A LOT - either for maximum throughput or less power usage / coolness / performance per Watt:
For example, with a GTX970 at ~108% TDP (157W) I can run @1342MHz GPU and 3600MHz GDDR or at ~60% TDP (87W) I can run at ~1050MHz and 3000MHz GDDR, 1.006V (175W at the wall with an i7 crunching CPU work on 6 cores).
The former does more work; it's ~9% faster than stock.
The latter is more energy efficient: it uses 60% of stock power but does ~16% less work than stock, or ~25% less than with the OC'ed settings.
At 60% power but ~84% performance the 970 would be ~40% more efficient in terms of performance/Watt (0.84/0.60 ≈ 1.40). On the above table that would be ~224% of the performance/Watt efficiency of a Titan.
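As a quick sanity check on that arithmetic, here's a small Python sketch (the figures are the ones quoted above; the 160% is the stock GTX970's performance/Watt from the table):

# Relative performance/Watt of an underclocked GTX970 (figures from the post).
stock_perf, stock_power = 1.00, 145.0   # stock GTX970: 100% work at 145W TDP
eco_perf, eco_power = 0.84, 87.0        # ~60% TDP profile: ~16% less work

eff_vs_stock = (eco_perf / stock_perf) / (eco_power / stock_power)
print(f"vs stock 970: {eff_vs_stock:.2f}x")   # ~1.40x

titan_rel = 1.60 * eff_vs_stock               # stock 970 = 160% of a Titan
print(f"vs GTX Titan: {titan_rel:.0%}")       # ~224%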

I expected that the 750Ti and 750 Maxwells would also boost further and use more power than their reference specs suggest, but Beyond pointed out that although they do auto-boost, they don't use any more power for here (60W). It's likely that they can also be underclocked for better performance/Watt, coolness, or lower power use.

The GTX960 should also be very adaptable towards throughput or performance/Watt.

PM me with errors/corrections.

8) Message boards : Graphics cards (GPUs) : how to fix low memory clocks on GM204 cards (Message 40631)
Posted 1 day ago by Profile skgiven
GFlops is a theoretical maximum, expressed for double precision or, for here, single precision (x16 will be added to Volta).
GPUs have different but fixed architectures, so actual GPU performance depends on how the app and task expose architectural weaknesses (bottlenecks): what the GPU has been asked to do and how it has to do it. Different architectures are relatively better or worse at doing different things.

WRT NVidia, GFlops is a reasonably accurate way of comparing cards' performances within a series (or two series based on the same architecture), as it reflects what they are theoretically capable of (which can be calculated). However, there are other 'factors' which have to be considered when looking at performance running a specific app.
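For what it's worth, that theoretical single-precision figure is just two FLOPs (a fused multiply-add) per CUDA core per clock. A minimal Python sketch, assuming reference boost clocks (actual boost varies card to card):

# Theoretical single-precision GFLOPS: 2 ops (one FMA) per CUDA core per clock.
def sp_gflops(cuda_cores: int, clock_mhz: float) -> float:
    return 2 * cuda_cores * clock_mhz / 1000.0

print(sp_gflops(1664, 1178))  # GTX 970 at reference boost    -> ~3920
print(sp_gflops(3072, 1075))  # GTX Titan X at reference boost -> ~6605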

As MrS said, by calculating and applying these 'correction factors' against compute capabilities we were able to compare performances of cards from different generations, up to and including Fermi. With Kepler we saw a greater variety of architectures within the two series so there were additional factors to consider.

Differences in bandwidth, boost, cache size, memory rates and memory size came to the fore as important considerations when comparing Kepler cards/thinking about buying one to crunch here - these impacted upon actual performance.
Cache size variation seemed to be important, with larger cache sizes being less restrictive.
Same type cards boosted to around about the same speed, irrespective of the price tag or number of fans.
Some cards in a series were even from a different generation (GF, not GK).
Bandwidth was lower for some Keplers and was a significant impediment, varying with WU type. This is still the case with Maxwells; some WUs require more bandwidth than others. For example, running one WU type (say a NOELIA_pnpx) on a GTX970 might incur a 49% Memory Controller Load (MCL), and 56% on a GTX980, whereas other WUs would only incur a 26% MCL on a GTX970 and 30% on a GTX980. In the latter case a GTX980 might, for the sake of argument, appear to be 19% faster than a GTX970, whereas with the NOELIA WUs the GTX970 would fare slightly better relatively, with the GTX980 only 15% faster. Increasing memory rates (to what they are supposed to be) alleviates the MCL, but it's less noticeable if the MCL is low to begin with.
The GDDR5 usage I'm seeing from a NOELIA_pnpx task is 1.144GB, so it's not going to perform well on a 1GB card!

Comparing NVidia cards to AMD's ATI range based on GFlops is almost pointless without considering the app. Equally pointless is comparing apps that use different features: OpenCL vs CUDA.
9) Message boards : Graphics cards (GPUs) : how to fix low memory clocks on GM204 cards (Message 40606)
Posted 3 days ago by Profile skgiven
The GTX580 is GF110 not GM204.
Performance is not the same thing as GFlops.
10) Message boards : Number crunching : Tip: How to obtain zero GPU buffer size and normal CPU buffer size (Message 40595)
Posted 4 days ago by Profile skgiven
There is another way to do this.
Create two cc_config files in two different directories, one to disable GPU 0 and the other to enable GPU 0 again, and have two batch files on your desktop; one to disable GPU 0 in Boinc prior to gaming and the other to enable it again. Just double-click on the 'DisableGPU0' batch file to disable GPU0 prior to gaming and double-click 'EnableGPU0' to enable it afterwards.
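A minimal sketch of the 'DisableGPU0' batch file, assuming a default Windows install (the C:\BoincCfg\... path is a placeholder for wherever you keep the two prepared cc_config files; 'EnableGPU0.bat' is the same with the other directory):

@echo off
rem Copy the GPU-0-excluding cc_config into the Boinc data directory
copy /Y "C:\BoincCfg\DisableGPU0\cc_config.xml" "C:\ProgramData\BOINC\cc_config.xml"
rem Tell the running client to re-read its configuration
"C:\Program Files\BOINC\boinccmd.exe" --read_cc_config

Note that changes to GPU exclusions may need a client restart to take effect.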

See the following link,

https://www.gpugrid.net/forum_thread.php?id=3738&nowrap=true#36684

