
Message boards : Graphics cards (GPUs) : GT240 and Linux: niceness and overclocking

Lem Novantotto
Message 20399 - Posted: 11 Feb 2011 | 19:53:18 UTC

Hi guys!
I've just set up a system with an NVIDIA GT240 GPU card and a dual core CPU. It's running Linux Ubuntu 10.04 64bit.

The Nvidia driver is 195.36.24. A 6.12 app has been downloaded, which I think is fine for this CUDA card, and progress is increasing by 0.1% every minute, so the whole workunit should take about 60,000 seconds on average.

I'd like to let you know a thing I've noticed, and to ask you a couple of questions.

First, my "report". LOL.
The gpugrid WU now being crunched wants 0.16 CPUs (though it doesn't go beyond 2%-3% load) plus the GPU. The app runs with a niceness of 10 by default, whilst other CPU boinc apps (boincsimap, WCG...) run with a niceness of 19. I've found that 10 vs 19 is not enough: when the CPU is saturated, even by "idle" applications running at 19, the GPU almost stops working, its temperature falls and the gpugrid app becomes much slower: many, many times slower. Renicing the gpugrid app to -1 has given it back its normal speed.
I have not tested any other values for now.

So my first question is: is there a simple way to tell boinc to set to -1 the niceness for gpugrid apps?

My second question is about overclocking the GPU. I know about the

Option "Coolbits" "1"

line in the Device section of /etc/X11/xorg.conf.
But it only gives the chance to overclock the GPU core and the memory, while I happen to know that it is the shader frequency that matters most. How could I raise it?
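
For reference, the relevant Device section looks roughly like this (the Identifier is just an example; Coolbits "1" merely unlocks the clock controls in nvidia-settings after restarting X):

Section "Device"
    Identifier "GT240"
    Driver "nvidia"
    Option "Coolbits" "1"
EndSection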

Thanks in advance for everything,
and bye.

skgiven (volunteer moderator)
Message 20400 - Posted: 11 Feb 2011 | 22:30:08 UTC - in response to Message 20399.
Last modified: 11 Feb 2011 | 22:39:51 UTC

If you free up a CPU core, use swan_sync=0, and report tasks immediately it should help a lot:

FAQ: Best configurations for GPUGRID

I don't know of a way to only overclock the shaders from within Linux.

From what I read, the problem is that nice/renice settings tend to be lost on restart, but I read about a method over at WCG that might stick. Unfortunately I'm not a Linux expert and I cannot test it at the minute (I don't even have a Linux system right now). Have a look, and if you can work something out, post it up so others can benefit. This is worth a read.

If anyone has definite answers to these problems please post your methods.

Good luck,

Lem Novantotto
Message 20402 - Posted: 11 Feb 2011 | 23:25:51 UTC - in response to Message 20400.
Last modified: 11 Feb 2011 | 23:42:30 UTC

If you free up a CPU core, use swan_sync=0, and report tasks immediately it should help a lot:


First of all, thanks a lot for your support! :)

Uhm... I think... No, I definitely do not want to waste 98% of a CPU thread (I have two cores without hyper-threading) if I can get the exact same GPU efficiency through a niceness adjustment (verified) while happily crunching two other CPU tasks that will be a tiny bit slower than usual.


I don't know of a way to only overclock the shaders from within Linux.


I suspected and feared it. :(
I'll keep on searching for a while, but I think I'm going to surrender.


From what I read the problem is that nice/renice settings tend to be lost on restart


Sure, but I've already put in /etc/rc.local the line:

renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda)

and in /etc/crontab the line:

*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1

which is quite enough for me now.
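
To check that the renice stuck, something like this (same app name as above) shows the current nice value in the NI column:

$ ps -o pid,ni,comm -p $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda)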

Thanks again. :)
Bye.

Kirby54925
Message 20404 - Posted: 12 Feb 2011 | 11:06:15 UTC

Nope, that didn't work for me. I tried changing the niceness to -1 and then let rosetta@home run on all four cores on my i5 750, but rosetta@home effectively shut out the GPUGrid application (no meaningful work was being done by the GPU). This occurred even when the rosetta@home apps were running with a niceness of 19 and GPUGrid running with a niceness of -1.

Lem Novantotto
Message 20405 - Posted: 12 Feb 2011 | 13:20:15 UTC - in response to Message 20404.

Nope, that didn't work for me. I tried changing the niceness to -1 and then let rosetta@home run on all four cores on my i5 750, but rosetta@home effectively shut out the GPUGrid application (no meaningful work was being done by the GPU). This occurred even when the rosetta@home apps were running with a niceness of 19 and GPUGrid running with a niceness of -1.


Sorry to hear it. But you're dealing with a GTX570 (fine card!) and the 6.13 app, aren't you? Maybe that makes the difference.

The niceness trick is actually working for me with boincsimap 5.10 and WCG (FightAIDS@Home 6.07 and Help Conquer Cancer 6.08) on the CPU side.

You said Rosetta... It works for me with Rosetta Mini 2.17 too.

However, my next try, probably tomorrow, will be to test the newest 270.18 Nvidia driver and see what happens with the 6.13 gpugrid app (someone is getting fine results even with a GT240 and 6.13).

Bye.

skgiven (volunteer moderator)
Message 20406 - Posted: 12 Feb 2011 | 16:07:22 UTC - in response to Message 20405.

When I use swan_sync=0 and free up a CPU core, my GT240s now improve performance by around 7.5% (on a Phenom II 940, compared to running 4 CPU tasks and not using swan_sync). It used to be higher, but recent tasks seem less reliant on the CPU (the project sets the GPU task priority to below normal, while the CPU task priority is lower still: low). I'm using the 6.12 app. The 6.13 app is substantially slower for the GT240 cards, and while that might have changed, I doubt it. I have not tested the 270 driver, partly because I don't have a Linux platform, but also because none of the 260.x drivers I previously tested offered any improvement for the 6.13 app, and some caused my cards to drop their speed. I would be very reluctant to install Linux just to test the 270.18 beta for a GT240, but let us know how you get on, should you choose to (I suggest you don't if you are unsure of how to uninstall it and revert to your present driver).

CPU usage depends on the GPU:
If, for example, you have a GT240 and a Q6600, and it takes 20h to complete one GPUGrid task using, say, 350 sec of CPU time, then the total amount of CPU time you would need to support a GTX470 would be about 4 times that, as a GTX470 would do 4 similar tasks in 20h. It now appears more important to use swan_sync=0 and free up a CPU core/thread for high-end GPUs, but less so for entry-level GPUs.

With high-end CPUs this is not too much of a problem: you just need 1 core/thread out of 6 to 12 cores/threads, and if you have a GTX570 or GTX580 the choice is clear. If I had one single system with a dual-core CPU and an entry-level GPU, I think I would be more likely to crunch on both CPU cores, unless doing so caused the GPU tasks to extend beyond 24h.

Kirby54925
Message 20408 - Posted: 12 Feb 2011 | 18:34:56 UTC

To elucidate further, I am currently using Linux Mint 10 with kernel version 2.6.35-25-generic. My GTX 570 is using the latest stable Linux driver, version 260.19.36. And yes, I am using version 6.13 of the GPUGrid CUDA app. It certainly would be nice if the Rosetta app running on that fourth core would slow down a little so that the GPUGrid app could get some CPU time in.

skgiven (volunteer moderator)
Message 20414 - Posted: 13 Feb 2011 | 10:50:54 UTC - in response to Message 20408.

If that were my system, I would be inclined to do some calculations.
How much quicker would your GTX570 be if you freed up one of those CPU cores, and how much faster would the other 3 CPU cores be when running Rosetta?
(Compare Elapsed Time minus CPU Time for each task.)

Carlesa25
Message 20417 - Posted: 13 Feb 2011 | 12:02:55 UTC - in response to Message 20414.

Hi: I change the priority of the GPU task through the "System Monitor" GUI in Ubuntu 10.10 each time a new task loads, i.e. every 5 to 8 hours (on my GTX295); it's no problem or nuisance.

Personally, I move the two GPU tasks from the default priority of 10 that GPUGRID starts with to 0 when I'm working normally on the PC, and to -10 in the hours when I'm not using it; even at the highest priority the overall responsiveness of the computer (i7-930, 6GB RAM) doesn't suffer. Greetings.

Lem Novantotto
Message 20418 - Posted: 13 Feb 2011 | 12:24:46 UTC - in response to Message 20400.
Last modified: 13 Feb 2011 | 12:25:52 UTC


I don't know of a way to only overclock the shaders from within Linux.


Found this possible workaround (not yet tested): http://www.nvnews.net/vbulletin/showthread.php?t=158620

Bye.

Saenger
Message 20420 - Posted: 13 Feb 2011 | 16:23:13 UTC - in response to Message 20417.

Hi: I change the priority of the GPU task through the "System Monitor" GUI in Ubuntu 10.10 each time a new task loads, i.e. every 5 to 8 hours (on my GTX295); it's no problem or nuisance.

Personally, I move the two GPU tasks from the default priority of 10 that GPUGRID starts with to 0 when I'm working normally on the PC, and to -10 in the hours when I'm not using it; even at the highest priority the overall responsiveness of the computer (i7-930, 6GB RAM) doesn't suffer. Greetings.

Same here, though it's every 28h ±5h, depending on the WU, on my GT240.
All the stuff with swan_sync, freeing up a core and such doesn't change anything on this machine; it's just a smokescreen that pretends to offer clues.
Changing the priority from 10 to 0 or even -3 increases the crunch speed big time; it's the only solution for my GT240 under Linux.

Fortunately Einstein now provides a reliable application for CUDA crunching under Linux that does worthy science as well, so I manually stop Einstein every other day in the evening, download a fresh GPUGrid WU, manually set it to -3 and let it crunch for the next ~28h, set GPUGrid to No New Work asap, and set Einstein working again by hand once the GPUGrid WU is through.

Unfortunately, sometimes Linux decides to set the nice factor back to 10 during crunching; I don't know why or when, it looks unpredictable, and so I lose precious crunching time because of the app's stubborn refusal to do what I want. I would very much appreciate a setting in my account or my BOINC (or my ubuntu, if there is a more permanent way of doing so outside System Monitor), that would keep the app at the desired nice level.

Lem Novantotto
Message 20422 - Posted: 13 Feb 2011 | 17:33:15 UTC - in response to Message 20420.
Last modified: 13 Feb 2011 | 17:34:09 UTC

I would very much appreciate a setting in my account or my BOINC (or my ubuntu, if there is a more permanent way of doing so outside System Monitor), that would keep the app at the desired nice level.


I have put in my /etc/crontab the line:

*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1

So every 5 minutes a cron job renices to -1 the task named acemd2_6.12_x86_64-pc-linux-gnu__cuda (if it exists and its niceness differs; otherwise it does nothing).

Modify the line according to your app name (6.13). You can probably find the proper name by executing (as root!):

# ls /var/lib/boinc-client/projects/www.gpugrid.net/ |grep ace

You can also choose another niceness, if -1 doesn't satisfy you. :)

HTH.
Bye.

Saenger
Message 20423 - Posted: 13 Feb 2011 | 18:00:35 UTC - in response to Message 20422.

I would very much appreciate a setting in my account or my BOINC (or my ubuntu, if there is a more permanent way of doing so outside System Monitor), that would keep the app at the desired nice level.


I have put in my /etc/crontab the line:

*/5 * * * * root renice -1 $(pidof acemd2_6.12_x86_64-pc-linux-gnu__cuda) > /dev/null 2>&1

It seems to work fine, thanks a lot

I still have to switch manually between Einstein and GPUGrid, as otherwise I will not make the deadline here if BOINC switches between the apps, but that's nothing GPUGrid can do anything about (besides setting the deadline to the needed 48h); that's a BOINC problem.

skgiven (volunteer moderator)
Message 20424 - Posted: 13 Feb 2011 | 18:38:01 UTC - in response to Message 20420.

Saenger,

All the stuff with swan_sync, freeing up a core and such doesn't change anything on this machine; it's just a smokescreen that pretends to offer clues.

Yet another misleading and disrespectful message!

Your 98.671 ms-per-step performance for a GIANNI_DHFR1000 is exceptionally poor for a GT240.

Using the recommended config (even on Vista, which is over 11% slower than Linux), with swan_sync and the 6.12 app, I get 22.424 ms per step.

Lem Novantotto, thanks for the Linux tips.
That hardware overclock method should work for your GT240, as it was tested on a GT220. Several users at GPUGrid have performed hardware OCs for Linux in the past. If it's any help, I tend to leave the core clock at stock, the voltage at stock, and only OC the shaders to around 1600MHz, usually 1599MHz (this is stable on the 6 GT240 cards I presently use). Others can OC to over 1640, but it depends on the GPU.

Saenger
Message 20425 - Posted: 13 Feb 2011 | 18:53:45 UTC - in response to Message 20424.
Last modified: 13 Feb 2011 | 18:54:09 UTC

Saenger,
All the stuff with swan_sync, freeing up a core and such doesn't change anything on this machine; it's just a smokescreen that pretends to offer clues.

Yet another misleading and disrespectful message!

Your 98.671 ms-per-step performance for a GIANNI_DHFR1000 is exceptionally poor for a GT240.

Using the recommended config (even on Vista, which is over 11% slower than Linux), with swan_sync and the 6.12 app, I get 22.424 ms per step.

Why do you ignore my messages?
I'm using this stupid swan_sync thingy; there's no f***ing use for it.
I've tried to "free a whole CPU" for it; the only effect was an idle CPU.

So don't talk to me about misleading and disrespectful!

skgiven (volunteer moderator)
Message 20427 - Posted: 13 Feb 2011 | 19:32:27 UTC - in response to Message 20425.

...and yet there is no mention of swan_sync in your task result details.

If it was in use, the result details should say:
SWAN: Using synchronization method 0
and you would not have an idle CPU!

For example,
# Total amount of global memory: 497745920 bytes
# Number of multiprocessors: 12
# Number of cores: 96
SWAN: Using synchronization method 0
# Time per step (avg over 795000 steps): 22.424 ms
# Approximate elapsed time for entire WU: 44848.953 s
called boinc_finish

</stderr_txt>
]]>

Validate state Valid
Claimed credit 7491.18171296296
Granted credit 11236.7725694444
application version ACEMD2: GPU molecular dynamics v6.12 (cuda)

Your configuration is more suited to running Einstein and CPU tasks than GPUGrid tasks, so that is what you should do. What is the point in messing about every other day to run a different project at half efficiency or less?

Saenger
Message 20429 - Posted: 13 Feb 2011 | 20:52:15 UTC - in response to Message 20427.
Last modified: 13 Feb 2011 | 20:54:45 UTC

...and yet there is no mention of swan_sync in your task result details.

I don't know how this stupid swan_sync stuff is supposed to work; it's your invention, not mine.

As I posted 66 days ago, and 82 days ago before that, and as I just tested again, my SWAN_SYNC is "0".

saenger@saenger-seiner-64:~$ echo $SWAN_SYNC
0


So if your precious swan_sync isn't working with my WU, as you claim, it's not my fault.

Kirby54925
Message 20430 - Posted: 14 Feb 2011 | 4:03:55 UTC

I'm beginning to suspect that the reason swan_sync isn't working is that the environment variable is associated with the wrong user. GPUGrid tasks don't run under your login user; rather, they run as the boinc user.
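
One way to check (assuming the client process is simply named boinc) would be to dump the running daemon's environment:

$ sudo cat /proc/$(pidof boinc)/environ | tr '\0' '\n' | grep SWAN

If nothing comes back, the variable never reached the boinc user's processes.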

skgiven (volunteer moderator)
Message 20431 - Posted: 14 Feb 2011 | 7:14:01 UTC - in response to Message 20429.
Last modified: 14 Feb 2011 | 7:23:41 UTC

Using Linux is not easy, especially if you use several versions. As you know, you can add export SWAN_SYNC=0 to your .bashrc file, but that is easier said than done and depends on how/where you install BOINC. With the 10.10 versions it is especially difficult; when I tried it, the repository only had the 260 driver, and some of the familiar commands did not work.
If you can't overclock or tune the fans properly, and you have niceness/swan_sync problems, the lure is not so strong - but this is down to a lack of Linux knowledge/detailed instruction.

Kirby54925
Message 20433 - Posted: 14 Feb 2011 | 10:38:19 UTC - in response to Message 20431.

I installed BOINC using the package manager. Rather than adding

export SWAN_SYNC=0

to my .bashrc file, I added it to /etc/bash.bashrc instead. I changed it because, looking back at all of the tasks I did, even though I set swan_sync in my .bashrc file, the SWAN synchronization message has never shown up in any of them. That tells me the GPUGrid task is not picking up the environment variable set in .bashrc. Perhaps placing it in /etc/bash.bashrc will help.

Lem Novantotto
Message 20434 - Posted: 14 Feb 2011 | 10:45:09 UTC - in response to Message 20431.

As you know you can add export SWAN_SYNC=0 to your .bashrc file


That wouldn't work, skgiven.

A good place to set an *environment* variable is /etc/environment.

$ sudo cp /etc/environment /etc/environment.backup
$ echo 'SWAN_SYNC=0' | sudo tee -a /etc/environment

The *next* time boinc runs something, it will do it... "zeroswansyncing". ;)
It should, at least.
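
After the next login (or a reboot), any shell should then show it:

$ echo $SWAN_SYNC
0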

Bye.

skgiven (volunteer moderator)
Message 20435 - Posted: 14 Feb 2011 | 14:51:15 UTC - in response to Message 20434.

Yes, my mistake. I'm working blind here (no Linux) and, as you can tell, Linux is not my forte; it's been months since I used any version. I found swan_sync fairly easy to use on Kubuntu 10.04 but struggled badly with Ubuntu 10.10. The commands are very different, even from Ubuntu 10.04; I had to use Nautilus to get anywhere and change lots of security settings. Your entries look close to what I used, but I would have to dig out a notebook to confirm.
I'm reluctant to install 10.10 again because I want to use the 6.12 app with my GT240 cards, and I found Ubuntu 10.10 too difficult to work with (security, swan_sync and driver issues, lots of updates). Although I could use it with my GTX470 cards, I need to control the fan speed accurately, and if I'm not mistaken it is either automatic or 100% (too hot or too loud), with no in between? When I managed to use a 195.x driver with 10.10 (no idea how) I ended up with a 640x400 screen. An update attempt killed the system and a recovery attempt failed; hence I'm back on Windows. The possibility of overclocking my GT240 under Linux is very tempting, but at present I don't have the time to try it.
Thanks for the posts. Linux expertise greatly appreciated.

Saenger
Message 20436 - Posted: 14 Feb 2011 | 15:22:28 UTC

I still fail to grasp why this extremely nerdy stuff isn't simply put into the app, especially as GPUGrid worked nearly fine and smoothly until the last change of app; it just had to acknowledge its use of a whole core, like Einstein does now, and everything would have been fine.

Now it's a project for nerds or windoze.

skgiven (volunteer moderator)
Message 20438 - Posted: 14 Feb 2011 | 17:42:58 UTC - in response to Message 20436.

GDF did ask that some such configurations be facilitated via BOINC Manager. I guess the differences between the various distributions would make it difficult.

At the minute I'm running my quad GT240 system without swan_sync, on Vista x64. I'm running 3 greedy CPU tasks on a quad core and using eFMer Priority x64 with reasonable success; after upping the shaders again I am now only 7.5 to 9.5% less efficient than using swan_sync and freeing up one CPU core per card. I want to increase CPU usage for another project for a while, so for now this is acceptable to me. eFMer is more like changing the nice value than using swan_sync.

While there are a few "how to use Linux" threads, there is no adequate "how to optimize for Linux" thread. If I get the time I will try to put one together, but such things are difficult when you are not a Linux guru.

Carlesa25
Message 20442 - Posted: 14 Feb 2011 | 18:48:16 UTC - in response to Message 20438.

While there are a few "how to use Linux" threads, there is no adequate "how to optimize for Linux" thread. If I get the time I will try to put one together, but such things are difficult when you are not a Linux guru.


Hi: The truth is that I am not very knowledgeable about Linux, but I'm using Ubuntu 10.10 (and other versions for a year before that) and it works very well - better performance than with Windows 7.

The current Nvidia driver is 270.18 and my GTX295 works perfectly: it does not exceed 62°C and the fan control works (from 40% to 65%). Well-ventilated case.

As I said in another thread, just changing the process priority (from 10 down to -10, or wherever suits me) gives extensive control, and I get good yields.

For this kind of work I have found it a much better choice than Windows. Greetings.


Kirby54925
Message 20443 - Posted: 14 Feb 2011 | 18:59:57 UTC - in response to Message 20442.
Last modified: 14 Feb 2011 | 19:21:07 UTC

I tried changing the niceness of the GPUGrid task to -10 (defaulted to 19). Then I set BOINC to use 100% of the processors. I wanted to see if the priority change would allow Rosetta@Home and GPUGrid to share CPU time in the fourth core. It still seems like Rosetta@Home is being greedy with the CPU, causing GPUGrid to slow down drastically. The Rosetta@Home task in the fourth core was using 99-100% of that particular core. The kicker was that the niceness for Rosetta@Home tasks is set at 19! It really appears that swan_sync doesn't do anything at all. It certainly isn't showing up in the stderr section when I look at my completed tasks.

Just to reiterate, I'm using Linux Mint 10, which is based on Ubuntu 10.10.

Lem Novantotto
Message 20451 - Posted: 15 Feb 2011 | 11:28:57 UTC - in response to Message 20443.

I tried changing the niceness of the GPUGrid task to -10 (defaulted to 19). Then I set BOINC to use 100% of the processors. I wanted to see if the priority change would allow Rosetta@Home and GPUGrid to share CPU time in the fourth core. It still seems like Rosetta@Home is being greedy with the CPU, causing GPUGrid to slow down drastically. The Rosetta@Home task in the fourth core was using 99-100% of that particular core. The kicker was that the niceness for Rosetta@Home tasks is set at 19! It really appears that swan_sync doesn't do anything at all. It certainly isn't showing up in the stderr section when I look at my completed tasks.


Kirby, I'm running the 6.12 app, so I cannot faithfully replicate your environment. Please open a terminal and run these commands:

1) top -u boinc

Would you please cut&paste the output?

Looking at the rightmost column, you'll immediately identify the gpugrid task. Read its "pid" (the leftmost value on its line). Let's call this number PID.
Now press q to exit top.

2) ps -p PID -o comm= && chrt -p PID && taskset -p PID
(replacing PID with the number).

Cut&paste this second output, too.

Now repeat point 2 for a rosetta task, and cut&paste once again.

We'll be able to have a look at how things are going. Maybe we'll find something.
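
Incidentally, if you want to dump the same info for every BOINC task in one go, something like this should work (assuming the tasks run as user boinc):

$ for p in $(pgrep -u boinc); do ps -p $p -o comm= ; chrt -p $p ; done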

Bye.

Kirby54925
Message 20454 - Posted: 15 Feb 2011 | 22:24:29 UTC - in response to Message 20451.
Last modified: 15 Feb 2011 | 22:27:27 UTC

acemd2_6.13_x86
pid 2993's current scheduling policy: SCHED_IDLE
pid 2993's current scheduling priority: 0
pid 2993's current affinity mask: f


minirosetta_2.1
pid 2181's current scheduling policy: SCHED_IDLE
pid 2181's current scheduling priority: 0
pid 2181's current affinity mask: f


As you can see, they're exactly the same. At this point, all four cores are being used, and GPUGrid has a niceness of -10.

EDIT: The percent completion is still incrementing on GPUGrid; it's just moving at a glacial pace. Normally, when only three cores are working on CPU tasks, GPUGrid tasks take about 4.5 hours to finish. With four cores enabled, this one looks set to take 2x-2.5x longer.

Lem Novantotto
Message 20455 - Posted: 15 Feb 2011 | 23:10:01 UTC - in response to Message 20454.

acemd2_6.13_x86
pid 2993's current scheduling policy: SCHED_IDLE
pid 2993's current scheduling priority: 0
pid 2993's current affinity mask: f


minirosetta_2.1
pid 2181's current scheduling policy: SCHED_IDLE
pid 2181's current scheduling priority: 0
pid 2181's current affinity mask: f


As you can see, they're exactly the same. At this point, all four cores are being used, and GPUGrid has a niceness of -10.


Here is the problem! :)

See my outputs with different tasks from different projects:

acemd2_6.12_x86
pid 15279's current scheduling policy: SCHED_OTHER
pid 15279's current scheduling priority: 0
pid 15279's current affinity mask: 3

wcg_faah_autodo
pid 29777's current scheduling policy: SCHED_BATCH
pid 29777's current scheduling priority: 0
pid 29777's current affinity mask: 3

simap_5.10_x86_
pid 15996's current scheduling policy: SCHED_BATCH
pid 15996's current scheduling priority: 0
pid 15996's current affinity mask: 3

minirosetta_2.1
pid 16527's current scheduling policy: SCHED_BATCH
pid 16527's current scheduling priority: 0
pid 16527's current affinity mask: 3

You see it, don't you? The problem is your SCHED_IDLE scheduling policy, above all on the *gpugrid* app: a SCHED_IDLE process only gets the CPU when no SCHED_OTHER or SCHED_BATCH process wants it, so once the CPU is saturated the GPU feeder thread starves, no matter how nice the other tasks are.

Niceness is not priority itself: niceness is intended to affect priority (under certain circumstances). If you want to read something about priority:

$ man 2 sched_setscheduler


Try changing the scheduling policy of your *gpugrid* app to SCHED_OTHER:

$ sudo chrt --other -p 0 PID

(using its right PID - check with top).

Remember that, if it works, you'll have to do it every time a new task begins (you could set up a cron job to do it, as we've seen for niceness).

Let me know.
Bye.

Kirby54925
Message 20456 - Posted: 16 Feb 2011 | 1:00:30 UTC - in response to Message 20455.

It works! Now I can run four CPU tasks and a GPUGrid task at the same time! Thank you very much! This is much better than the swan_sync method that is often spoken of here.

Another thing: does this need to be in rc.local as well? Or would crontab suffice? Additionally, does the chrt command need the terminal output suppression thingy at the end in crontab? (... > /dev/null 2>&1)

Lem Novantotto
Message 20458 - Posted: 16 Feb 2011 | 9:12:08 UTC - in response to Message 20456.
Last modified: 16 Feb 2011 | 9:48:51 UTC

It works!


I'm glad. :)
You're welcome.

Another thing: does this need to be in rc.local as well? Or would crontab suffice? Additionally, does the chrt command need the terminal output suppression thingy at the end in crontab? (... > /dev/null 2>&1)


Using a cron job, we could forget about rc.local (even for the niceness thing).
However, it doesn't hurt: rc.local is executed every time the runlevel changes, so basically at boot (and at shutdown). Our cron job runs every 5 minutes, so without rc.local we lose at most 5 minutes (just as we lose at most five minutes every time a new task starts), which is not much with workunits that last many hours. But we can make it run more frequently if we like - every three minutes, for example. This entry takes care of both the scheduling policy and the niceness:

*/3 * * * * root chrt --other -p 0 $(pidof acemd_whatever_is_your_app) > /dev/null 2>&1 ; renice -1 $(pidof acemd_whatever_is_your_app) > /dev/null 2>&1

Bye.

P.S.
The above works with no more than one gpugrid task being crunched at a time. Actually the renice part works with more, but the chrt part doesn't: you can renice many tasks at once, but you cannot change the scheduling policy of more than one task per invocation. Let's generalize to any number of simultaneous gpugrid tasks:

*/3 * * * * root for p in $(pidof acemd_whatever_is_your_app) ; do chrt --other -p 0 $p > /dev/null 2>&1 ; done ; renice -1 $(pidof acemd_whatever_is_your_app) > /dev/null 2>&1
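
P.P.S. If the one-liner gets too unwieldy, the same logic can live in a small script called from cron instead (the script path and app name are placeholders, as above):

#!/bin/sh
# Reassert scheduling policy and niceness for every running gpugrid app.
APP=acemd_whatever_is_your_app
for p in $(pidof "$APP") ; do
    chrt --other -p 0 "$p" > /dev/null 2>&1    # back to SCHED_OTHER
done
renice -1 $(pidof "$APP") > /dev/null 2>&1     # and the friendlier niceness

with a crontab entry such as:

*/3 * * * * root /usr/local/sbin/gpugrid_sched.sh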

Saenger
Message 20460 - Posted: 17 Feb 2011 | 5:47:07 UTC
Last modified: 17 Feb 2011 | 5:51:28 UTC

I've got something to compare:
After the post by Lem Novantotto I tried another way of using SWAN_SYNC - none of that non-existent .bashrc stuff that project fanboys tried to impose, but the /etc/environment stuff. It had some grave consequences:

I've got two similar WUs - or at least they should be similar according to the project's credit settings - one crunched before the change and one yesterday.

Old one, 76-KASHIF_HIVPR_n1_bound_so_ba2-92-100-RND2370_0:
- crunched with nice factor -1
- crunched with SWAN_SYNC=0 set according to fanboy instructions, but obviously not active in reality, according to stderr.out
- Time per step (avg over 575000 steps): 61.307 ms
- Run time 77,243.59 seconds
- CPU time 1,577.01 seconds

New one, 16-KASHIF_HIVPR_n1_bound_so_ba2-94-100-RND9931_0:
- crunched with nice factor 10
- crunched with SWAN_SYNC=0 according to Lem, this time mentioned in stderr.out
- Time per step (avg over 325000 steps): 82.817 ms
- Run time 102,772.46
- CPU time 102,095.10

As you can see, the main difference is the usage of massive CPU power, with the result of significantly reduced crunching speed.
It behaved like before the app change, i.e. it pretended to be 0.15 CPU + 1 GPU while it actually used 1 CPU + 1 GPU, leaving 3 cores for the 4 concurrent WUs of other projects.
I started with both the nice forced to -1 and the new swan_sync, but that left one core idling; somehow it gave the other 4 parallel projects no more than 2 cores, according to System Monitor, so I commented that line out in crontab.

This new method is definitely not useful, I will never try it again. It's a massive waste of resources.

My next try, after Einstein gets its usual share again, will be with a forced nice factor of 19, so the 4 cores will be divided evenly among the 5 WUs, as worked fine with the old app.

Saenger
Message 20462 - Posted: 18 Feb 2011 | 18:06:33 UTC
Last modified: 18 Feb 2011 | 18:09:37 UTC

This time I got a Gianni, and I still have one of those with 7,491.18 credits in my list from before as well. Here's the data:

old: 251-GIANNI_DHFR1000-34-99-RND0842_1
- crunched with nice factor -1 or -3, I don't remember exactly
- crunched with SWAN_SYNC=0 set according to fanboy instructions, but obviously not active in reality, according to stderr.out
- Time per step (avg over 25000 steps): 98.671 ms
- Run time 81,994.45 seconds
- CPU time 2,106.21 seconds

new: 800-GIANNI_DHFR1000-37-99-RND6435_1
- crunched with nice factor 19
- crunched with SWAN_SYNC=0 according to Lem, this time mentioned in stderr.out
- Time per step (avg over 1905000 steps): 41.600 ms
- Run time 83,551.62
- CPU time 66,219.42

So again no speed-up, just a waste of CPU power - but at least not slower than the old one ;)

I think I'll bugger this swan_sync rubbish and stick to the priority alone. It's simple and it works, and it's far more effective than wasting precious CPU time on a slow-down.

Edith says:
I don't have a clue what these ms/TS figures are; they are obviously something completely different for the two WUs - the time steps don't seem to relate to work done. Credits are defined by the project, so both WUs did the same amount of work according to the project; the new one just needed more time steps for the same work done.

skgiven (volunteer moderator)
Message 20463 - Posted: 18 Feb 2011 | 20:20:14 UTC - in response to Message 20462.

You have to leave a CPU core free when using swan_sync, otherwise it's not going to be faster for the GPU.

There is no getting away from the fact that the 6.13 app is slower for GT240 cards, which is why I use the 6.12 app, albeit driver-dependent. Your drivers are for Einstein, not GPUGrid.

Linux is generally faster than Windows XP, and Vista is > 11% slower than XP, yet I can finish a task using a GT240 on Vista in less than half the time you can:

Without using swan_sync, using eFMer Priority and running 3 CPU tasks on a quad core CPU,
506-GIANNI_DHFR1000-38-99-RND4572_1
Run time 47567.632002
CPU time 5514.854

Using swan_sync=0,
597-GIANNI_DHFR1000-33-99-RND7300_1
Run time 40999.983999
CPU time 35797.14

Both Claimed credit 7491.18171296296, Granted credit 11236.7725694444

Clearly, using swan_sync is still faster (16%) if done correctly, and your Linux setup is poor (half the speed it could be).

Lem Novantotto
Message 20464 - Posted: 18 Feb 2011 | 22:51:24 UTC - in response to Message 20463.

There is no getting away from the fact that the 6.13 app is slower for GT240 cards, which is why I use the 6.12 app, albeit driver-dependent.


Yesterday I decided to give the 270.18 driver a try.

The cuda workunit I had in my cache went like a charm with the good old 6.12 app; then a cuda31 workunit was downloaded, and it was a no-go (even though *both* apps, the former and the latter, almost saturated the GPU - the new driver can show this kind of info - and took their rightful slice of CPU time). In the end I had to go back to 195.36.

The problem - if we can call it a problem - is that every time boinc asks for new work, it first uses a function to retrieve the CPU and GPU specs on the fly, which seems appropriate. These specs are part of the request (they can be read in /var/lib/boinc-client/sched_request_www.gpugrid.net.xml). Among them is the cudaVersion, which is "3000" with older drivers and "4000" with newer ones. I'm pretty sure the gpugrid server sends back a cuda31 WU (and the 6.13 app if needed) if it reads 4000, and a cuda WU (6.12) otherwise.
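
For the curious, the value actually reported can be checked by grepping that request file:

$ grep cudaVersion /var/lib/boinc-client/sched_request_www.gpugrid.net.xml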

Since the specs aren't stored in a configuration file but rather fetched from the driver on the fly, feigning a cudaVersion of 3000 is not so easy: you would have to modify the BOINC sources and recompile to hide the newer driver's response.

Sorry for possible mistakes and for my awful English; I'm a bit tired today.

Goodnight (it's midnight here). :)

Saenger
Message 20465 - Posted: 18 Feb 2011 | 22:51:34 UTC - in response to Message 20463.
Last modified: 18 Feb 2011 | 22:54:12 UTC

You have to leave a CPU core free when using swan_sync, otherwise it's not going to be faster for the GPU.

This answer is totally b***s***. It took a whole core in my first comparison example, and it was extremely slow.

There is no getting away from the fact that the 6.13 app is slower for GT240 cards, which is why I use the 6.12 app, albeit driver-dependent. Your drivers are for Einstein, not GPUGrid.

The project team is giving me those WUs although it knows my setup. So it's their fault, and only their fault, for giving 6.13 to a GT240 instead of 6.12. They know my card; they actively decided to give me 6.13, so they are saying it's better for my machine. If they are too stupid to give me the right app, it's because of their lack of interest, not mine. As I said before: they only care about people who buy a new card for several hundred €/$ every few months.

zombie67 [MM]
Message 20466 - Posted: 19 Feb 2011 | 1:34:26 UTC

I have to agree with Saenger on this one, which is a pretty rare thing. I have noticed no difference with swan_sync plus a free core. This is on a Win7 machine with a 580, which is a different setup from this thread's subject, but my impression of these tweaks that are supposed to speed things up is similar.

Kirby54925
Message 20467 - Posted: 19 Feb 2011 | 9:32:34 UTC

Yep, I agree. There is no difference whatsoever with swan_sync on or off for my GTX 570; it will still run for about 4.5-5 hours.

On another note, it seems the server is running low on workunits to send out - I see only one workunit ready to send. Could this be in preparation for the upcoming long workunits?

skgiven (volunteer moderator)
Message 20470 - Posted: 19 Feb 2011 | 11:55:59 UTC - in response to Message 20467.

Did any of you restart (even just the X server - not so handy in 10.10) after adding swan_sync=0? If you didn't, that would explain your observations.


zombie67 [MM], you might have a bad OC:

3705076 17 Feb 2011 4:22:13 UTC 17 Feb 2011 15:27:33 UTC Completed and validated 16,526.66 16,764.16 7,491.18 11,236.77 ACEMD2: GPU molecular dynamics v6.13 (cuda31)
3703901 16 Feb 2011 23:16:48 UTC 17 Feb 2011 4:26:51 UTC Completed and validated 14,691.17 13,531.52 7,491.18 11,236.77 ACEMD2: GPU molecular dynamics v6.13 (cuda31)

Two identical tasks, but an 11% difference in completion time. Some other task times are also wayward. Your card is probably throttling back at times (a feature designed to stop it failing).


Lem Novantotto,
Do you think the 270.18 driver caused the card to run in low power/clock mode (a common issue on Win with recent drivers)?

CUDA 3.0 (3000) = 6.12 app
CUDA 3.1 (3010) or above (for now) = 6.13 app
("above" can be 3.2 or 4.0)

I would not expect too much from the 270.18 driver for a GT240; it's aimed at the latest and next versions of Fermi.


Kirby54925,

There is no difference whatsoever with swan_sync on or off for my GTX 570. It will still run for about 4.5-5 hours.
You don't have swan_sync enabled, and none of your tasks as far back as 5th Feb have actually used swan_sync!

ACEMD2 is at 70 WUs available. I don't know why, but perhaps they are letting them run down so they can start to use ACEMDLONG and ACEMD2 tasks, and/or they need to remove tasks from the server in batches. Thanks for the warning; I will keep an eye out, and if I think my Fermis will run dry I will allow MW tasks again.

Lem Novantotto
Message 20471 - Posted: 19 Feb 2011 | 12:45:43 UTC - in response to Message 20470.


Lem Novantotto,
Do you think the 270.18 driver caused the card to run in low power/clock mode (a common issue on Win with recent drivers)?


The software showed the card running at max clock, maximum performance, 95% GPU utilisation, and it was running as hot as with the 195 driver.
So I think we can regard that as a fact.

The reason for the degraded performance must lie elsewhere.


Cuda 3.0 (3000)= 6.12app
Cuda 3.1 (3010) or [above] (for now)= 6.13app
[above] can be 3.2 or 4.0


Yep.
The 270 driver reports cudaVersion 4000: since 4000 >= 3010, the gpugrid server sends a cuda31 WU (which will be run by the 6.13 app).


I would not expect too much in the 270.18 driver for a GT240, just for the latest and next versions of Fermi.


I tried it just for the sake of curiosity, and indeed it was really far too slow: a no-go.

Bye.

zombie67 [MM]
Message 20474 - Posted: 19 Feb 2011 | 17:58:04 UTC
Last modified: 19 Feb 2011 | 18:02:04 UTC

My card is running at stock speed.

Other CUDA projects run at consistent speeds, with no variation.

Edit: and yes, I rebooted.

Kirby54925
Message 20477 - Posted: 19 Feb 2011 | 19:29:36 UTC - in response to Message 20470.

You don't have swan_sync enabled, and none of your tasks as far back as 5th Feb have actually used swan_sync!


Since February 5 I have had swan_sync=0 in my .bashrc file. Then, after reading a bit of Lem's posts, I tried moving it to /etc/profile. That didn't work, so I put it in /etc/environment. It still doesn't show up in the workunit logs. And yes, I did reboot - I can't play any of my favorite games unless I switch over to Windows.

That said, manually manipulating the Linux CPU scheduler did the trick. It allows the fourth core of my i5-750 to run both a CPU task and GPUGrid at the same time. GPUGrid still takes the same amount of time to finish as before. True, the CPU task runs a tiny bit slower because it has to share CPU time with GPUGrid, but at least every core is running at 100%.

As for the shortage of workunits, I have DNETC as a backup for my GPU. Unlike GPUGrid (at present, at least), DNETC runs my GPU at 100% load.

Greg Beach
Message 20506 - Posted: 24 Feb 2011 | 20:12:05 UTC

I previously tried to enable swan_sync with similar success to others in this thread - even renice -1 had no effect. I decided to try again, and after adding it to /etc/rc.d/rc.local, /etc/bashrc, /etc/profile and /etc/environment I still could not get it to work.

I finally added the following to the start() section of /etc/rc.d/init.d/boinc-client:

export SWAN_SYNC=0

and it worked. It gave my GT240 about a 20% improvement.

I don't know about other distros, but that's what it took on my Fedora 14 x86_64 system to get swan_sync enabled.
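
For anyone wanting to replicate this, the edit amounts to one extra line at the top of the start() function (the surrounding lines below are only illustrative - init scripts differ between distros):

# /etc/rc.d/init.d/boinc-client (excerpt)
start() {
    export SWAN_SYNC=0    # inherited by the daemon and every task it spawns
    # ... the script's original startup commands, unchanged ...
}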

Lem Novantotto
Message 20513 - Posted: 25 Feb 2011 | 17:48:11 UTC - in response to Message 20506.

I previously tried to enable swan_sync with similar success to others in this thread - even renice -1 had no effect. I decided to try again, and after adding it to /etc/rc.d/rc.local, /etc/bashrc, /etc/profile and /etc/environment I still could not get it to work.


Greg,

Your solution works flawlessly. Well done!

Since I don't know Fedora, could you please help me understand? I suspect that Fedora's PAM configuration is slightly different from Ubuntu's. I guess that if you run:

$ grep -r "pam_env.so" /etc/pam.d

you'll get some output containing "readenv=0". Is that true?

Thanks.
Bye.

Greg Beach
Message 20515 - Posted: 25 Feb 2011 | 19:51:24 UTC - in response to Message 20513.

Your solution works flawlessly. Well done!

Since I don't know Fedora, could you please help me understand? I suspect that Fedora's PAM configuration is slightly different from Ubuntu's. I guess that if you run:

$ grep -r "pam_env.so" /etc/pam.d

you'll get some output containing "readenv=0". Is that true?

Thanks.
Bye.

Glad it worked for you.

I ran the grep command, and on my Fedora system there were no entries with readenv. I assume that implies readenv=0.

Out of curiosity, since I haven't done a lot of monkeying around with PAM, I ran the same command on my Ubuntu install (running in VMware), and it shows a number of entries with readenv=1.

Greg

Lem Novantotto
Message 20520 - Posted: 25 Feb 2011 | 22:09:39 UTC - in response to Message 20515.


Glad it worked for you.


Ah, no, I do not actually use swan_sync. But I'm happy you found a way to get it working on your system, even though my advice about /etc/environment didn't help you.

Here I have no need to reserve a whole CPU core for gpugrid: I get exactly the same performance without wasting CPU power. Having just 2 cores, both are precious to me. :)


I ran the grep command, and on my Fedora system there were no entries with readenv. I assume that implies readenv=0.

Out of curiosity, since I haven't done a lot of monkeying around with PAM, I ran the same command on my Ubuntu install (running in VMware), and it shows a number of entries with readenv=1.


Yep. However, I'm pretty sure readenv=1 is the PAM default (readenv=1 tells pam_env.so to read /etc/environment).

Lacking any readenv=0, I do not understand why your /etc/environment isn't read. I may be wrong about the default PAM behaviour, of course.
I'll try a little bit harder.

Bye, and thank you again. :)

Lem Novantotto
Message 20527 - Posted: 26 Feb 2011 | 11:09:23 UTC - in response to Message 20520.

However, I'm pretty sure readenv=1 is the PAM default (readenv=1 tells pam_env.so to read /etc/environment).

Lacking any readenv=0, I do not understand why your /etc/environment isn't read. I may be wrong about the default PAM behaviour, of course.
I'll try a little bit harder.

Bye, and thank you again. :)


I've just tried a live virtualized Fedora 14, and /etc/environment is actually read.
I don't know why on your system it is not. Sorry, I have to give up.

Bye.

Greg Beach
Message 20542 - Posted: 28 Feb 2011 | 16:46:49 UTC - in response to Message 20527.

However, I'm pretty sure readenv=1 is the PAM default (readenv=1 tells pam_env.so to read /etc/environment).

Lacking any readenv=0, I do not understand why your /etc/environment isn't read. I may be wrong about the default PAM behaviour, of course.
I'll try a little bit harder.

Bye, and thank you again. :)


I've just tried a live virtualized Fedora 14, and /etc/environment is actually read.
I don't know why on your system it is not. Sorry, I have to give up.

Bye.

I don't know what to tell you. I set up SWAN_SYNC in /etc/environment, and I was able to open a command window and echo the SWAN_SYNC value no problem, so the default readenv=1 behaviour is there. However, it had no impact on the behaviour of the acemd2 app.

Only when I added the "export SWAN_SYNC=0" command to the boinc-client init script did acemd2 enable SWAN_SYNC.

Stoneageman
Message 20546 - Posted: 28 Feb 2011 | 18:20:49 UTC

My 4x580 Ubuntu rig has been working fine for some time with swan_sync=0 in /etc/environment. Yesterday I had to relocate it, and after booting up I noticed it was no longer using full cores. I typed env in the terminal and it showed swan_sync=0 was listed; however, upon checking /etc/environment, swan_sync was missing. I added it back and now it's OK again.

Greg Beach
Message 20547 - Posted: 28 Feb 2011 | 20:45:00 UTC

After the discussion here I've noticed that there are some differences in the way the two most common distros, Ubuntu and Fedora/Red Hat, process the environment.

I've just changed my system to put the "export SWAN_SYNC=0" command in the /etc/sysconfig/boinc-client file. I believe the equivalent file on Ubuntu is /etc/default/boinc-client.

This has the advantage of limiting the scope of the SWAN_SYNC value to the BOINC client environment, and it shouldn't be affected by any operating system or BOINC client updates.
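
Concretely, the whole change is this one line in that file (Fedora path shown; the file is sourced by the init script, so the variable is visible only to BOINC and its children):

# /etc/sysconfig/boinc-client
export SWAN_SYNC=0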

Any thoughts?

Carlesa25
Message 20548 - Posted: 28 Feb 2011 | 21:14:55 UTC - in response to Message 20547.

Hello: Well, I have configured my Ubuntu 10.10 64-bit with SWAN_SYNC in /etc/environment, and it is working perfectly.

I would also mention that I find it useful to change BOINC Manager > Preferences > Advanced > Processor usage, "Switch between applications every", to 15 minutes instead of the normal 60 minutes.

With SWAN_SYNC=0, the CPU core attached to the GPU (in my case two cores, as I have a GTX295) runs at 100%, so every 15 minutes the core in use by the GPU changes, which avoids overheating and distributes the load better across the CPU cores. Greetings.

Saenger
Message 21747 - Posted: 28 Jul 2011 | 4:57:54 UTC - in response to Message 20462.
Last modified: 28 Jul 2011 | 5:38:27 UTC

I think I'll bugger this swan_sync rubbish and stick to the priority alone. It's simple and it works, and it's far more effective than wasting precious CPU time on a slow-down.

As Einstein had some difficulties with WU creation this week, I crunched some new WUs, this time with the new 6.14 app. Unfortunately I forgot to delete the swan_sync rubbish beforehand, and so I wasted precious resources on this idiotic stuff again.

93-KASHIF_HIVPR_GS_so_ba1-8-100-RND2726_1: run time 96,510.67 s, CPU time 1,336.56 s, claimed credit 12,822.18, granted credit 16,027.72

98-KASHIF_HIVPR_cut_ba1-45-100-RND5086_1: run time 95,885.94 s, CPU time 74,305.64 s, claimed credit 5,929.17, granted credit 7,411.47


The upper one was without swan_sync, the lower one with it. Niceness was 19 for the lower one and -5 for the upper one. You can clearly see that both took about the same amount of wall-clock time, but without swan_sync the CPU time was vastly lower. You can also see that the upper one did more than double the scientific work of the lower one, as the credits, fixed per WU by the project, are more than double for the upper one.

To sum it up:
Swan_sync on a GT240 is a giant waste of resources.
Fanboys who proclaim otherwise are either dumb or liars.
Niceness is crucial; swan_sync is detrimental.

Retvari Zoltan
Message 21755 - Posted: 28 Jul 2011 | 11:43:25 UTC - in response to Message 21747.

To sum it up:
Swan_sync on a GT240 is a giant waste of resources.

No wonder: SWAN_SYNC was introduced to handle the low GPU usage of Fermi-based cards.

skgiven (volunteer moderator)
Message 21763 - Posted: 28 Jul 2011 | 15:56:50 UTC - in response to Message 21747.

Saenger, you are comparing two different task types, using different niceness settings, and concluding that swan_sync makes tasks slower, despite knowing that SWAN_SYNC would need to be used alongside a free CPU core even if it were to make much difference for that GPU type. Starving the GPUGrid task of any CPU time would obviously make it less efficient.
By the way, the new app is 6.15, not 6.14, and it's for Windows, not Linux.

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 21766 - Posted: 29 Jul 2011 | 15:33:41 UTC - in response to Message 21763.
Last modified: 29 Jul 2011 | 16:10:07 UTC

Saenger, you are comparing two different task types, using different niceness settings, and concluding that swan_sync makes tasks slower, despite knowing that SWAN_SYNC would need to be used alongside a free CPU core even if it were to make much difference for that GPU type. Starving the GPUGrid task of any CPU time would obviously make it less efficient.
By the way, the new app is 6.15, not 6.14, and it's for Windows, not Linux.

The app is called "ACEMD2: GPU molecular dynamics v6.14 (cuda31)" for Linux, as can easily be seen on the apps page; the old app was called "ACEMD2: GPU molecular dynamics v6.13 (cuda31)".

As I have said in other posts, it uses a whole CPU core if swan_sync is set to 0 and the niceness stays at -10 as delivered by the project.
Using more CPU power is detrimental to the wall-clock time of the WU, as I have shown several times.
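For anyone who wants to adjust the priority by hand, a minimal sketch (it assumes the app's process name contains "acemd", which may differ on your system):

    # run as root; -n -1 puts the app just above default-priority CPU tasks
    renice -n -1 -p $(pgrep -f acemd)

The setting is lost when the task or the machine restarts, so it has to be reapplied.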

If a WU is granted the same amount of credit as another, it is about the same size: credits are determined by the project, independent of anything on my machine.
If a WU gets more credit than another, it has done more scientific work.
I crunched two WUs of the same type (KASHIF_HIVPR); both took the same amount of time, but one used a full CPU core the whole time and the other hardly touched the CPU.
The one that barely used the CPU did far more than double the scientific work in that time, as shown by its more-than-double credit.

Obviously, "starving the GPUGrid task of any CPU time" makes it really much more efficient.

Edith says:

As my thanks were gone as well, here they are again (to Message 21755):
To sum it up:
Swan_sync on a GT240 is a giant waste of resources.

No wonder: SWAN_SYNC was introduced to handle the low GPU usage of Fermi-based cards.

Thanks for this very useful information. I would have appreciated hearing it from the project people; I don't know why they said otherwise.
____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21770 - Posted: 29 Jul 2011 | 22:29:39 UTC - in response to Message 21766.

To sum it up:
Swan_sync on a GT240 is a giant waste of resources.

No wonder: SWAN_SYNC was introduced to handle the low GPU usage of Fermi-based cards.

Thanks for this very useful information. I would have appreciated hearing it from the project people; I don't know why they said otherwise.

I will quote myself from the distant past (321 days ago):
"If a CPU core is allocated to the Fermi GPU it significantly increases the GPU speed, especially if you use SWAN_SYNC=0. This is the recommended configuration for Fermi users, especially GF100 cards."

I expect SWAN_SYNC would make little or no difference for a 9800GT, or any other CC1.1 card; it is mainly for Fermis (317 days ago).

"The Windows only optional variable SWAN_SYNC=0 is for Fermi's and does not have to be used along with one free core but it usually helps a lot. It will make little or no difference to the performance of a 9800GT. There is little need to leave a CPU core free, unless you have 3 or 4 such cards in the same system, at which point your CPU performance for CPU only tasks will degenerate to the point that you might as well free up a CPU core. On a high end Fermi it is still optional but generally recommended to use both SWAN_SYNC=0 and to leave a Core/Thread free; the performance difference is quite noticeable." (266days ago)

"For a GTX260 this is not the situation; you don’t need to free up a CPU core or use SWAN_SYNC=0" (251 days ago)

It's worth remembering that there have been 5 different apps since SWAN_SYNC was originally used only for Linux (within the app, by default). At different times in the past SWAN_SYNC either did or didn't make a difference for Linux or Windows, if set up correctly, depending on the card. Whatever situation you or anyone else previously found yourselves in is no longer relevant; we are onto 6.14 and 6.15. I have not tried any GPU under Linux with the present 6.14 app, and I found that my GT240s kept downclocking with the latest Windows drivers, so I pulled those cards.

I think both the Windows and Linux apps were named 6.14 and released on 12 Jun 2011, with only the Windows version subsequently updated to 6.15 (on 6 Jul); thread priority is configurable in the Windows app but not in the Linux one.

Profile Saenger
Avatar
Send message
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Level
Pro
Scientific publications
watwatwatwatwatwat
Message 21771 - Posted: 30 Jul 2011 | 15:02:25 UTC - in response to Message 21770.

I will quote myself from the distant past (321 days ago):

OK, that was the thread where gdf tried to force the swan_sync stuff on me, starting here: the thread after the fiasco with the rubbish new 6.12 app.

I expect SWAN_SYNC would make little or no difference for a 9800GT, or any other CC1.1 card; it is mainly for Fermis (317 days ago).

I never looked in that thread; the title didn't look like anything that would help with my problem. It's about hardware, while my problem is software.


"The Windows only optional variable SWAN_SYNC=0 is for Fermi's and does not have to be used along with one free core but it usually helps a lot. It will make little or no difference to the performance of a 9800GT. There is little need to leave a CPU core free, unless you have 3 or 4 such cards in the same system, at which point your CPU performance for CPU only tasks will degenerate to the point that you might as well free up a CPU core. On a high end Fermi it is still optional but generally recommended to use both SWAN_SYNC=0 and to leave a Core/Thread free; the performance difference is quite noticeable." (266days ago)

The title "Lots of failures" doesn't sound like something for my prob, why should I look it up there?
And if it was "Windows only" in this thread, why did you try so hard to make me use it under Linux in the other one?

"For a GTX260 this is not the situation; you don’t need to free up a CPU core or use SWAN_SYNC=0" (251 days ago)
Thread title "Fermi", why should I look in there?

It's worth remembering that there have been 5 different apps since SWAN_SYNC was originally used only for Linux (within the app, by default). At different times in the past SWAN_SYNC either did or didn't make a difference for Linux or Windows, if set up correctly, depending on the card. Whatever situation you or anyone else previously found yourselves in is no longer relevant; we are onto 6.14 and 6.15. I have not tried any GPU under Linux with the present 6.14 app, and I found that my GT240s kept downclocking with the latest Windows drivers, so I pulled those cards.

I think both the Windows and Linux apps were named 6.14 and released on 12 Jun 2011, with only the Windows version subsequently updated to 6.15 (on 6 Jul); thread priority is configurable in the Windows app but not in the Linux one.


That's absolutely irrelevant insider talk to me. These are apps delivered by the project: they worked fine with the name 6.05 attached and became unusable with 6.12 attached. Then all you project people tried to:
a) make me use the newest drivers, otherwise I would have problems
b) make me use swan_sync=0, otherwise I would have problems²
c) make me use older drivers, otherwise I would have problems

² The project people told me several different ways to make swan_sync=0 work, none of which helped; I managed to make it work with Leo's help, and it showed that using it was complete rubbish.

____________
Gruesse vom Saenger

For questions about Boinc look in the BOINC-Wiki

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 6,169
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21773 - Posted: 30 Jul 2011 | 22:02:48 UTC - in response to Message 21770.

I think both the Windows and Linux apps were named 6.14 and released on 12 Jun 2011, with only the Windows version subsequently updated to 6.15 (on 6 Jul); thread priority is configurable in the Windows app but not in the Linux one.

How exactly can I configure the thread priority in the Windows app?
I can't recall and I can't find any messages about doing this.
Perhaps you mean that thread priority is configurable by the programmers?

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 21789 - Posted: 2 Aug 2011 | 10:48:17 UTC - in response to Message 21773.

"thread priority being configurable in the Windows app"
Yes, I meant configurable by the programmers for the ACEMD apps, not by us.
You could view the priorities from Process Explorer, but you cannot configure them.

ATM the different apps and tasks perform to a variety of standards (GPU utilizations). The short tasks for the Windows 6.15 app seem to run at higher GPU utilization, even without SWAN_SYNC: while running an IBUCH_GCY task I'm seeing a steady 95 to 96% GPU utilization, without SWAN_SYNC on and without freeing up a CPU thread (i7-2600). I was getting around 85% for the longer tasks (which give more credit/h). For a GTX260 on W7, using SWAN_SYNC and freeing up a CPU core only increased GPU utilization by 4% for the long tasks.

I don't know what the performance of the 6.14 app is like for different GPU types. I expect it is still significantly higher for Fermis when using SWAN_SYNC and freeing up a CPU core, only slightly higher for the CC1.3 cards, and no different for the other cards (unless you are not using the CPU at all and it downclocks, in which case SWAN_SYNC forces the CPU core to stay at a high frequency).
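For anyone who wants to check their own numbers under Linux, a sketch (output format and option support vary by driver version):

    # one-off utilization query; wrap in "watch -n 1" for a live view
    nvidia-smi -q -d UTILIZATION

On Windows, tools such as GPU-Z report the same utilization figure.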
