11) Message boards : Number crunching : Remote disable WU (Message 46300)
Posted 2647 days ago by Profile skgiven
No. Change your password!
12) Message boards : Graphics cards (GPUs) : Temperatures (Message 46283)
Posted 2648 days ago by Profile skgiven
From a terminal,
    sudo gedit /etc/X11/xorg.conf


Scroll down the xorg.conf file until you find Section "Screen" and, within that section, add: Option "Coolbits" "12"

For example,

Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "coolbits" "12" SubSection "Display" Depth 24 EndSubSection EndSection

Save the config file and restart.
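If you'd rather not hand-edit the file, the NVIDIA driver's nvidia-xconfig utility can add the option for you; a sketch, assuming nvidia-xconfig is installed (it rewrites xorg.conf, so keep a backup):

    sudo nvidia-xconfig --cool-bits=12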

On restarting open NVIDIA X Server Settings
Beneath your GPU (GPU0), select PowerMizer
To reduce the GPU clock by 96MHz, enter -96 under Editable Performance Levels, Graphics Clock Offset. Similarly, to reduce the memory transfer rate by 100MHz, enter -100 for the Memory Transfer Rate Offset.
To set an audibly acceptable GPU fan speed, click on Thermal Settings, tick Enable GPU Fan Settings and set the fan to something sensible (probably 60% or more) to test it. Keep an eye on the GPU temperature and adjust accordingly so that it does not go too high. Up to 70C is usually fine (if it's not fine at that temperature there's likely a problem with the GPU). Between 70C and 80C, adjust your settings or at least keep an eye on the temperature and performance (look out for failures/system issues). Over 80C, increase the fan speed further and/or reduce the GPU and memory clocks further, testing as you go.
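You can also keep an eye on the temperature, clock and fan from a terminal while testing; a minimal sketch using nvidia-smi (refreshes every 5 seconds, Ctrl+C to stop):

    nvidia-smi --query-gpu=temperature.gpu,clocks.sm,fan.speed --format=csv -l 5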

Note that you need to reapply the settings after every restart, or create an .sh file containing the settings and set it to run at startup. For multiple GPUs you need to add Coolbits for each GPU (each under its own Screen section) and you might need specific drivers (375.20). The above is what I'm using for one GPU with the 370.28 drivers. I didn't get anywhere with nvclock - it might be defunct on Ubuntu 16.04.

Your NV settings can be added to a .sh file, which can be set as an executable and added to the startup list:
Right click on your desktop and Create a New Document, Empty Document and call it nv.sh (must end in .sh).
Paste in the following (note these settings underclock the GPU and memory and set the fan speed), then save and close the file:

#!/bin/bash
# Underclock the GPU core by 96MHz (offset applies to performance level 3)
nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=-96'
# Underclock the memory transfer rate by 100MHz
nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=-100'
# Enable manual fan control and set the fan to 60%
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=60'

Right click on the nv.sh file and select Properties. Under the Permissions tab select Allow executing file as program and close it.
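Alternatively, from a terminal (assuming the file is on your desktop):

    chmod +x ~/Desktop/nv.sh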
Search your PC for Startup Applications and then Add the nv.sh file to the list (located on the desktop):
    Name: nv.sh
    Command: /home/'username'/Desktop/nv.sh
    Comment: SetGPUandFanSpeeds


The settings will be applied automatically when the system starts up.
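To check that the offsets and fan speed were actually applied after a reboot, you can query them back; a minimal sketch, assuming the same attribute names and driver series as above:

    nvidia-settings -q '[gpu:0]/GPUGraphicsClockOffset[3]'
    nvidia-settings -q '[fan:0]/GPUTargetFanSpeed'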

13) Message boards : Graphics cards (GPUs) : GTX 10x0 Under utilised in GPU and RAM (Message 46023)
Posted 2671 days ago by Profile skgiven
GPU utilization is high with the BNBS tasks (98% on Linux) and they are less CPU-constrained; it is reasonably high with the short SDOEER_CASP22S tasks (91% on Linux/W10), but some of the others are closer to 80% and suffer more performance loss when the CPU is being used elsewhere. If you are getting significantly less than 80%, I suggest you reduce your CPU usage, as that's likely interfering with GPU performance. GL
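On Linux you can check the utilization figure directly with nvidia-smi (a minimal sketch; on Windows GPU-Z reports the same thing):

    nvidia-smi --query-gpu=utilization.gpu --format=csv -l 1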
14) Message boards : Graphics cards (GPUs) : Long runs have too short deadline (Message 46011)
Posted 2672 days ago by Profile skgiven
Requesting a constant supply of short tasks to better facilitate the masses, while you're enhancing the project management of your 'free' DC supercomputer. We can't expect people to fork out £300 for mid-range GPUs, never mind £650 high-end GPUs, only to encounter work shortages. Note my position on the price of the Pascals (it's too high ATM) and on system builds for here/crunching in general (wait until the competition is better).
15) Message boards : Number crunching : BOINC Supercomputer (Message 46010)
Posted 2672 days ago by Profile skgiven
Firstly, Boinc is not a supercomputer!
Nor does 'Boinc' have a supercomputer. It's a tool for a very broad spectrum of (and I use the word very loosely) 'scientific' research. People don't connect to Boinc, they use Boinc to connect to many different research projects. Sometimes their only affiliation/commonality is their use of Boinc.

Frankly I wish all that computing power would exclusively go to projects like GPUGRID, Folding@Home, WCG or Rosetta. Those alleviate suffering and move mankind more forward than e.g. decoding meaningless Enigma transmissions from WW2.

Just my two Cents. So may the shitstorm fall on me.

People choose to crunch for numerous different projects for combinations of many different reasons; some logical, some irrational, some sad, some funny... IMO you're ~99% correct, but I could be wrong, right or somewhere in between. What I've learnt from experience is that some of the less well-thought-of projects have helped push the boundaries of what Boinc can do/facilitates. Indirectly, the lesser-accepted projects have helped enhance Boinc as a research tool, and just because you don't think highly of a project doesn't mean you can't learn from their mistakes or take advantage of their successes. If person A hadn't developed a tool to allow distributed computers to help search for aliens, it's likely someone else would have. I've learnt a lot from other Boinc projects, especially their forums (so it's there for all to access). TN-Grid Platform is probably a project you've never heard of, but they are doing some good cutting-edge CPU research. It's not as big as here and their science isn't readily accessible (you need to be well read), but it's not to be sniffed at. Back in Oct they were interested in some help porting their app to CUDA (G, M, T, Dr's...).
16) Message boards : News : New server is running! (Message 46008)
Posted 2672 days ago by Profile skgiven
I have 10 days of queued work specified, but more often than not, I have a GPU sitting idle for lack of work.


It's recommended for here that you keep a very low cache of queued work. 10 days is asking for lots of trouble. Many CPU tasks will end up running in high-priority (panic) mode when they approach their deadline and prevent GPU tasks from running (GPU tasks still use the CPU to some extent). Any MT tasks you might have could kill the show.
17) Message boards : Graphics cards (GPUs) : 970 vs 780ti (Message 46006)
Posted 2672 days ago by Profile skgiven
The 780Ti was ~20% faster the last time I looked.

NVidia GPU Card comparisons in GFLOPS peak

The biggest issue ATM here WRT SP GFlops is that the apps/tasks don't scale well on the 'bigger' GPUs, and performance varies greatly by setup and task type - making it very hard to compare multiple generations of GPUs. When comparing a 970 with a 1060-3GB the swing can be from 5% in favour of one GPU to 20% in favour of the other, depending on task type. There are other big factors too, such as boost; recent generations boost a lot, and the various factory-tweaked versions make things even trickier to compare. The out-of-the-box performance variation is likely to be >5% across different GTX 1060-6GBs, for example.
18) Message boards : Graphics cards (GPUs) : Long runs have too short deadline (Message 46004)
Posted 2672 days ago by Profile skgiven
Tasks are not always the same size, and sometimes they end up in the wrong queue by mistake.

Use two profiles:

Home
670 - select short tasks. If you want to crunch long tasks when short tasks are not available, also tick 'If no work for selected applications is available, accept work from other applications?'; otherwise don't select that option.

Work
980s - select long tasks, and other work (short tasks) if long tasks are not available.

Also, set the GPUGrid resource share high (10000) and have one or more GPU backup projects with low resource shares (10, 1 or 0).

Note that you can also exclude an app for a specific project on a specific GPU in cc_config.xml, or exclude a GPU from a project altogether (the exclude_gpu elements go inside the options section):

    <cc_config>
      <options>
        <exclude_gpu>
          <url>http://albert.phys.uwm.edu/</url>
          <device_num>0</device_num>
          <app>einsteinbinary_BRP4</app>
        </exclude_gpu>
        <exclude_gpu>
          <url>http://www.gpugrid.net/</url>
          <device_num>1</device_num>
        </exclude_gpu>
      </options>
    </cc_config>
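After editing cc_config.xml, restart the BOINC client or tell it to re-read the file; a sketch using boinccmd, which ships with the client:

    boinccmd --read_cc_config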

19) Message boards : Graphics cards (GPUs) : 8 GPUs in one system (Message 46001)
Posted 2672 days ago by Profile skgiven
In theory, yes. In practice, it's not presently feasible (for here).
While there might be issues running 8 GPUs (boards, case, PSU, cooling), especially on some OSes, and while 7 might be more practicable, in theory up to ~16 GPUs are possible; but that's not the real issue. The last dual 'GeForce' GPU of note was the GeForce GTX Titan Z (GK110) [375W], from the GeForce 700 series. As the 1080 can theoretically do as much SP work using 180W, you're not likely to opt for a two-generation-old GK110 for this project. Other issues include high-end/high-cost system requirements, and there are multi-GPU scaling issues. Even with 40 PCIe lanes, high-end systems struggle to support 4 high-end GPUs.
There isn't a dual Quadro or Tesla Pascal, and a dual-socket Xeon system that could support 8 GPUs would run into the tens of thousands of dollars anyway.
In a few months' time all this might change. High-end systems will hopefully better support high-end discrete GPUs and we might see additions to the Pascal range; a dual Pascal at 300 to 375W TDP looks very doable to me and would make a lot of sense, albeit with a dollop of wishful thinking. In reality it would be at least as likely to arrive as a Titan as a GeForce, or neither might turn up at all. However, we saw app scaling issues with the 980 and they're blatantly there with the 1080, so a single-GPU 1080Ti probably wouldn't scale at all well for here; a dual-GPU option would make a lot more sense for the app and in terms of performance/Watt (which is, alas, more important to crunchers than to gamers).
20) Message boards : Graphics cards (GPUs) : 980 runs for days! (Message 45994)
Posted 2672 days ago by Profile skgiven
The GPU has likely downclocked. MSI Afterburner, GPU-Z and so on will tell you (look at the core clock rate in MHz). As well as configuring MSI Afterburner with a fan profile, you might want to give the GPU a clean.
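If you want a quick command-line check, nvidia-smi can report the current clocks and temperature (a sketch; it's installed with the NVIDIA driver on both Windows and Linux):

    nvidia-smi --query-gpu=clocks.sm,clocks.mem,temperature.gpu --format=csv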

