1) Message boards : News : CPU jobs on Linux (Message 48836)
Posted 2269 days ago by Profile Bikermatt

* CPU threads are limited to 4 (you should still be able to crunch multiple WUs at once, please check)

Let us know.


BOINC is still assigning all 32 of my threads to one task even though it is only using 4.
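A possible client-side workaround, sketched under the assumption that the problem is the CPU budget BOINC reserves per task (avg_ncpus) rather than the thread count itself. The app name and plan class below are placeholders; check the task's properties or client_state.xml for the real ones:

<app_config>
    <app_version>
        <app_name>cpu_app_name</app_name>  <!-- placeholder: substitute the actual CPU app name -->
        <plan_class>mt</plan_class>        <!-- placeholder: match the plan class shown on the task -->
        <avg_ncpus>4</avg_ncpus>           <!-- reserve only the 4 threads the task actually uses -->
    </app_version>
</app_config>

Save it as app_config.xml in the project directory and reread config files from the Manager. If the scheduler still reserves all 32 threads after that, the limit is server-side and only the project can fix it.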
2) Message boards : Number crunching : ADRIA_FAAH_WT batch (Message 46460)
Posted 2633 days ago by Profile Bikermatt
I picked up one of these on my Win7 i7-6800K host with two GTX 980s. The task is 73% complete at 22 hours. I am only getting 67% GPU utilization, but I am running 8 threads of evolution@home on the CPUs. With the CPU tasks suspended, GPU utilization is 79%.
3) Message boards : Graphics cards (GPUs) : GTX 10x0 Under utilised in GPU and RAM (Message 46022)
Posted 2671 days ago by Profile Bikermatt
You could try running 2 tasks at once with an app_config.xml file in the project's directory. The latest GERARD_MO_ tasks were only giving me 54% GPU utilization on GTX 980s in a Win7 box with swan_sync enabled. Running two at a time I am getting 80% load. Same story with the GERARD_NONAGGRE tasks on my GTX 670s.

<app_config>
    <app>
        <name>acemdbeta</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>acemdlong</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>acemdshort</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
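For clarity on what the numbers mean: gpu_usage is the fraction of a GPU each task occupies, so 0.5 lets BOINC run two tasks per card, while cpu_usage is the CPU budget reserved per task. After saving the file, select "Read config files" in the Manager (or restart the client) for the changes to take effect.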
4) Message boards : Number crunching : All Gerard WUs erroring (Message 42615)
Posted 3022 days ago by Profile Bikermatt
One more day without Linux crunching and without status info... does anybody care?


Someone didn't get their nap today.


No, don't be a jerk. This has been a known problem with a known cause for a week now and no one has bothered to fix it.

For many years there was a significant performance boost when crunching with Linux on this project; the developers actually recommended crunching under Linux. Many of us dedicated Linux hosts to this project because of that. Now my Linux hosts are reduced to crunching mathematics crap and looking for pulsars to keep my house warm.

Could someone please fix this?
5) Message boards : News : Old Noelia WUs (Message 29330)
Posted 4036 days ago by Profile Bikermatt
I just noticed I have two Noelia WUs on my Linux boxes for the first time in a few weeks. They were both stuck at 0%, and the boxes had to be rebooted to get the GPU running again.
6) Message boards : News : Old Noelia WUs (Message 29070)
Posted 4065 days ago by Profile Bikermatt
Been seeing multiple cases where new WUs are not checkpointing
(or showing progress in BOINC Manager). Have aborted a couple
without improvement. Running on client v7.0.27 under Linux x86.


Yes, I am still having problems in Linux too. I thought it was just the NOELIAs, but the new NATHANs are doing it as well. The tasks will lock up or remain at 0%, and the system has to be rebooted for the GPU to work again on any project. It might be the new app; my Linux systems had not needed a reboot in months before this.
7) Message boards : News : New project in long queue (Message 28942)
Posted 4070 days ago by Profile Bikermatt
The Noelia workunits refuse to run on my 660 Ti Linux system. They lock up or make no progress. I have finished one on each of two different Linux systems with 670s without problems.
8) Message boards : News : New task on long queue from Nate, named RSP1120528 (Message 25510)
Posted 4341 days ago by Profile Bikermatt

IMHO, this project is one that requires 24/7 crunching even on short tasks, because in some cases - say, getting a short-queue WU, crunching it a bit, shutting down the computer, and then resuming calculation 24 or more hours later - you don't get the bonus.

And the fact that short-queue tasks are valued less by the project, evidenced by the significantly lower credit it grants for crunching them, invites volunteers, IMHO, not to crunch them, which means those results will take longer to come in.

Whether the project likes it or not, the "bonus" will cause users to chase the credits the project gives - as I see it.


Yes, this is what I was trying to say earlier.
These tasks were run on the same 570:

10,800.46 sec 8,100.00 points
32,500.30 sec 81,000.00 points

3x the run time = 10x the credit

These tasks were run on the same gt240:

72,378.51 sec 8,100.00 points
291,447.05 sec 56,000.00 points

4x the run time = 7x the credit

From a credit standpoint it isn't worth it to run the short tasks even on slow cards that do not get any bonus at all.
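Working those out as credits per second (my arithmetic, rounded):

GTX 570: 8,100 / 10,800.46 s ≈ 0.75 credits/s (short) vs 81,000 / 32,500.30 s ≈ 2.49 credits/s (long)
GT 240: 8,100 / 72,378.51 s ≈ 0.11 credits/s (short) vs 56,000 / 291,447.05 s ≈ 0.19 credits/s (long)

So the long tasks pay roughly 3.3x per second on the 570 and 1.7x on the GT 240.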
9) Message boards : News : New task on long queue from Nate, named RSP1120528 (Message 25450)
Posted 4344 days ago by Profile Bikermatt
Long runs are for people with high-end cards, though. If you don't have a high-end card and you don't want it to crunch for days to complete one, that's exactly what the short queue is for.


Unfortunately, if you compare the credit granted per second of GPU time for the long versus the short tasks, you will notice the short tasks grant 2-3 times less credit for the same amount of time crunched.

Here is an example: http://www.gpugrid.net/results.php?hostid=98879

This is why people run the long tasks on slower cards. If the short tasks paid better more people would crunch them.

If the credit granted is based on the work done during the task, is this credit discrepancy due to the short tasks not doing the same amount of work per second, or is it due to how the bonus is calculated?
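For what it's worth, assuming the bonus scheme at the time was the one I remember - +50% credit for results returned within 24 hours and +25% within 48 hours - the bonus alone cannot account for a 2-3x per-second gap, so the base credit per second would have to differ as well.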
10) Message boards : News : New task on long queue, significantly longer than traditional tasks (Message 23894)
Posted 4427 days ago by Profile Bikermatt
Man, I thought I was going crazy when I saw this task running for over 15 hours, before I read this post. I thought my card had downclocked or something, but Windows wasn't reporting anything unusual. I just let it run and it finished fine, in about 20 hours on my overclocked GTX 570.



I thought about aborting one myself. Luckily, when I saw one it only had an hour left, so I let it finish. These are running at around 20 hours on a stock-clocked GTX 470 in Linux. Unfortunately, with a 115 MB upload I cannot get them turned in fast enough.

I have my cache set to 0.01 days, but they still download a few hours before they start, so I run out of time.
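For anyone wondering where that cache value lives: it is the "Store at least X days of work" preference, which can also be set through a global_prefs_override.xml in the BOINC data directory. A minimal sketch (the element names are standard BOINC; treat the values as illustrative):

<global_preferences>
    <work_buf_min_days>0.01</work_buf_min_days>             <!-- "Store at least ... days of work" -->
    <work_buf_additional_days>0.0</work_buf_additional_days> <!-- "Store up to an additional ... days" -->
</global_preferences>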

