
Message boards : Graphics cards (GPUs) : New driver for nvidia

Eckbert
Send message
Joined: 28 Dec 13
Posts: 1
Credit: 51,708,179
RAC: 0
Level
Thr
Message 35121 - Posted: 18 Feb 2014 | 17:23:53 UTC

A new driver for NVIDIA GPUs, version 334.89, is available as of today.

Have a good time
Eckbert

Killersocke
Send message
Joined: 18 Oct 13
Posts: 53
Credit: 406,647,419
RAC: 0
Level
Gln
Message 35123 - Posted: 18 Feb 2014 | 17:27:18 UTC

thx :-)

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35154 - Posted: 19 Feb 2014 | 17:32:09 UTC - in response to Message 35121.
Last modified: 19 Feb 2014 | 17:33:02 UTC

My first impression is that 334.89 (on Win7) is similar to its Beta predecessor.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35282 - Posted: 23 Feb 2014 | 16:39:05 UTC
Last modified: 23 Feb 2014 | 16:56:04 UTC

Just updated to 334.89 from 332.21. I'm now seeing that ACEMD tasks are no longer using a full core each. Is this normal?

In Windows' Resource Monitor, the CPU usage for each ACEMD task used to read 12 (one full HT thread); it now reads 3.
____________
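For reference, Resource Monitor's CPU column shows a process's average load as a percentage of *total* CPU capacity, so the same reading means different core counts on different machines. A minimal sketch of the conversion, assuming a hypothetical 8-thread CPU (the actual thread count of Matt's machine isn't stated in the post):

```python
def cores_used(reading_percent: float, logical_cpus: int) -> float:
    """Convert a Resource Monitor 'CPU' reading (percent of total
    capacity) into the number of logical cores actually occupied."""
    return reading_percent * logical_cpus / 100.0

# On an 8-thread CPU, a reading of 12 is roughly one full HT thread,
# while a reading of 3 is only about a quarter of a thread.
print(cores_used(12, 8))  # 0.96
print(cores_used(3, 8))   # 0.24
```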

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Message 35284 - Posted: 23 Feb 2014 | 16:56:58 UTC - in response to Message 35282.

Just updated to 334.89 from 332.21. I'm now seeing that ACEMD tasks are no longer using a full core each. Is this normal?


Yes, that's expected with 334. You might see a minimal performance drop on some WUs, but it's offset by the greatly reduced CPU load.

Matt

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35285 - Posted: 23 Feb 2014 | 17:02:14 UTC - in response to Message 35284.

Excellent. Thank you for the quick reply.
____________

Jeremy Zimmerman
Send message
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35289 - Posted: 23 Feb 2014 | 19:50:23 UTC - in response to Message 35285.

I like the 334.89 driver for my 680/460 XP machines. GPU utilization is just a couple of percent lower (dropped from 97%/99% down to 95%/98%), but CPU use is much lower on the 680. The 460 was already low to begin with.

I tried the 334.89 driver on my 780Ti machine but went back to the 331.82 driver. GPU utilization dropped by 10-20% depending on the WU, so low that the card was no longer running at maximum boost. Once I switched back, GPU utilization went back up.

Anyone else with a 780Ti on Win7 see a big GPU utilization drop (as reported by EVGA Precision) after installing 334.89?

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35295 - Posted: 23 Feb 2014 | 22:53:31 UTC - in response to Message 35289.

Thanks for this useful information Jeremy. I had downloaded the latest driver and was planning to install it, but after reading your post I won't. My 780Ti already runs at low GPU utilization, and I'd like it higher, not lower. I tried all your settings, but the card will not boost, despite running at 70-72°C.
____________
Greetings from TJ

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35305 - Posted: 24 Feb 2014 | 2:14:11 UTC

Good to know about the 780Ti. I'm about to switch my 680s out for two of those.
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35318 - Posted: 24 Feb 2014 | 10:47:32 UTC - in response to Message 35305.

Good to know about the 780Ti. I'm about to switch my 680s out for two of those.

My 780Ti is still fast, twice as fast as my 660, but Jeremy's results (on the same OS as mine) suggest it could be faster, and faster still under XP or Linux.
I will swap my two 660s for one 790 or a 780Ti.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35350 - Posted: 25 Feb 2014 | 21:22:01 UTC

Matt, if this new driver causes such a performance hit on faster cards I can think of two things to do:

- post a news item, maybe also via the BOINC messages, to warn power crunchers of this new driver (and to inform people who'd rather free a CPU core)

- bring back the option to disable swan sync, if this mechanism still works - for people who'd rather dedicate a core for maximum GPU throughput

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35357 - Posted: 26 Feb 2014 | 13:55:38 UTC - in response to Message 35350.

With the same or similar CPU usage settings (75% in my case) I find no issues with the 334.89 driver. On my W7 system the apps are either the same speed or slightly faster, though that's with GTX670 and GTX770 cards only (no GK110 cards). So perhaps it's only an issue on GK110 cards? I would want to see it tested by several others, and hear about their BOINC settings, before declaring it a bad egg (and then only for Win7 and GK110).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

lycnus
Send message
Joined: 18 Jan 09
Posts: 8
Credit: 196,775,113
RAC: 0
Level
Ile
Message 35358 - Posted: 26 Feb 2014 | 14:20:55 UTC

I haven't noticed any difference on my Windows 7 computer with EVGA 780TI SC

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35359 - Posted: 26 Feb 2014 | 14:32:24 UTC - in response to Message 35357.

With the same or similar CPU usage settings (75% in my case) I find no issues with the 334.89 driver. On my W7 system the apps are either the same speed or slightly faster, though that's with GTX670 and GTX770 cards only (no GK110 cards). So perhaps it's only an issue on GK110 cards? I would want to see it tested by several others, and hear about their BOINC settings, before declaring it a bad egg (and then only for Win7 and GK110).


I'm also running at 75% CPU usage with my 680s but will be upgrading to 780Tis either today or tomorrow. I'll report afterward and let you know what I find. I haven't noticed any difference in speed with the 680s running 334.89.
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35362 - Posted: 26 Feb 2014 | 18:48:27 UTC

On a GT640 DDR3 I'm seeing an SR runtime increase from 34440 s to 36270 s, i.e. a 5% throughput reduction, after switching from WHQL 327 to 334.89. CPU usage is set to 100%, but only about 7 of 8 cores are actually loaded due to my app_configs.

In both directly comparable configs I have run 2 full WUs. That's not much, but statistical fluctuation on this box is usually far smaller than the differences I'm seeing here.

Credit-wise that's not very much... but still, a loss greater than the throughput another HT core can add. And it's not exactly a high-performance card.

MrS
____________
Scanning for our furry friends since Jan 2002
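The throughput figure above follows directly from the runtimes: throughput is proportional to 1/runtime, so the relative change is old/new - 1. A small sketch of the arithmetic:

```python
def throughput_change(old_runtime_s: float, new_runtime_s: float) -> float:
    """Relative throughput change when a WU's runtime changes.
    Throughput ~ 1/runtime, so the change is old/new - 1
    (negative means a loss)."""
    return old_runtime_s / new_runtime_s - 1.0

# MrS's GT 640 numbers: SR runtime went from 34440 s to 36270 s.
loss = throughput_change(34440, 36270)
print(f"{loss:+.1%}")  # -5.0%
```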

John C MacAlister
Send message
Joined: 17 Feb 13
Posts: 181
Credit: 144,871,276
RAC: 0
Level
Cys
Message 35363 - Posted: 26 Feb 2014 | 19:37:13 UTC
Last modified: 26 Feb 2014 | 19:38:10 UTC

I recently updated my GTX 650Ti drivers to 320.57. I am curious why my tasks are all CUDA 4.2 while everyone else appears to be processing CUDA 5.5 tasks. The CPU utilization and run time for all my tasks are too high. Can anyone help?

Thanks, John

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35370 - Posted: 26 Feb 2014 | 21:17:42 UTC - in response to Message 35363.

CUDA 5.5 work is only distributed above a certain driver version, if I remember correctly. Older drivers support it officially, but it didn't work properly. The WHQL 327 or 332 drivers should be fine, and 334 as discussed here. For optimal runtimes your CPU should not run at 100% load.

Regarding CPU load: for a Kepler GPU (almost all 600 and 700 series cards) it's normal that one logical CPU core is used the whole time. Exception: the new driver discussed here.

MrS
____________
Scanning for our furry friends since Jan 2002

John C MacAlister
Send message
Joined: 17 Feb 13
Posts: 181
Credit: 144,871,276
RAC: 0
Level
Cys
Message 35377 - Posted: 27 Feb 2014 | 1:00:29 UTC - in response to Message 35370.

Thanks for the information: I will install new drivers shortly.

CUDA 5.5 work is only distributed above a certain driver version, if I remember correctly. Older drivers support it officially, but it didn't work properly. The WHQL 327 or 332 drivers should be fine, and 334 as discussed here. For optimal runtimes your CPU should not run at 100% load.

Regarding CPU load: for a Kepler GPU (almost all 600 and 700 series cards) it's normal that one logical CPU core is used the whole time. Exception: the new driver discussed here.

MrS

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35381 - Posted: 27 Feb 2014 | 11:49:31 UTC - in response to Message 35377.
Last modified: 27 Feb 2014 | 11:49:50 UTC

The Windows app versions are 8.15 (cuda42) and 8.15 (cuda55).

Even with the latest WHQL drivers I still get both versions:

383x-SANTI_MARwtcap310-13-32-RND9504_0 5210585 26 Feb 2014 | 20:35:04 UTC 27 Feb 2014 | 7:01:14 UTC Completed and validated 32,363.52 10,592.70 115,650.00 Long runs (8-12 hours on fastest card) v8.15 (cuda42)

I'm not sure whether those WUs actually use 4.2 or not; the run times are very similar, and a few % either way could easily be down to other factors (system usage, CPU apps...). Anyway, both apps are v8.15 for Windows. I expect 5.5 just removes a few bugs.

211x-SANTI_MARwtcap310-26-32-RND6477_0 5208701 26 Feb 2014 | 8:51:09 UTC 26 Feb 2014 | 22:01:34 UTC Completed and validated 32,019.34 11,711.09 115,650.00 Long runs (8-12 hours on fastest card) v8.15 (cuda55)

On Linux I also get both 5.5 and 4.2 WU types, but the apps are 8.03 (cuda42) and 8.03 (cuda55).

I thought the server was supposed to assign the app version based on drivers (and possibly on performance under each app type). Maybe it's not working quite right?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35396 - Posted: 28 Feb 2014 | 16:29:21 UTC

Just got my 780Tis in and running on 334.89. One is running SANTI_MAR at 71-73% utilization and the other is running NOELIA_FXA, averaging around 79% utilization.

These are my first runs on these cards. Anyone else running a 780Ti with 334.89? I'm curious how my numbers compare. GPU utilization dropped compared with the 680, but I think that was to be expected.

I'm also still running CPU tasks at 75%. I checked 50%, 62.5% and 75%, and there does not seem to be any effect on the GPU tasks.
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35403 - Posted: 28 Feb 2014 | 23:15:34 UTC - in response to Message 35396.

I'm using driver 331.82 and see ~76% GPU usage on the 780Ti. Run times are around 24000 seconds, but they vary from WU to WU. With XP or Linux, times go under 20000 seconds, though I have not tried that myself. Still, the card is faster than a GTX680, even on Win7/8/8.1.
____________
Greetings from TJ

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35404 - Posted: 28 Feb 2014 | 23:26:35 UTC - in response to Message 35403.

Thanks. It looks like we're having similar problems with this card. I've posted a couple times on the "Poor times with 780 Ti" thread.

http://www.gpugrid.net/forum_thread.php?id=3583

Looks like it's not the new driver causing it.
____________

Jeremy Zimmerman
Send message
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35405 - Posted: 1 Mar 2014 | 0:09:06 UTC - in response to Message 35404.

Matt,

If you do not mind, I would be very curious whether your utilization increases if you install the 331.82 drivers.

http://www.gpugrid.net/forum_thread.php?id=3634&nowrap=true#35285

This weekend I will install the 334.89 driver again to check that I can reproduce my previous drop in utilization.

Regards,
Jeremy

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35406 - Posted: 1 Mar 2014 | 0:21:46 UTC - in response to Message 35405.
Last modified: 1 Mar 2014 | 0:26:02 UTC

Jeremy,

Thank you. I remember reading that post now that I've seen it again - even downloaded them at the time just in case. I'll give it a try tomorrow.
____________

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35407 - Posted: 1 Mar 2014 | 3:57:39 UTC

Got to it a bit early.

After I installed 331.82 I was running fine until the tasks in progress completed. One card came back up to full boost with its new task; the other has gone back down to base clocks again with its new task. Both are running SANTIs now at about 75% utilization, which is just a couple of points better than under 334.89, as discussed in this thread.
____________

Jeremy Zimmerman
Send message
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35408 - Posted: 1 Mar 2014 | 4:19:51 UTC - in response to Message 35407.

Driver is 331.82 and the OS is Win7 64-bit. A Santi is running on each 780Ti card. Both cards are holding full boost (1187 and 1200MHz). EVGA Precision says utilization is ~75% with low variation (jumping 72-77%).

I install 334.89 as a clean install. Reboot, and both cards come back up at full boost with low variation at ~73% utilization.

Change from Adaptive to Prefer Max Performance, apply, reboot.

Now cards are at 45% utilization with high variation (jumping 40-60%). Cards not full boosting.
Reboot again, and now 73% (low variation).
Reboot again, and now 60% (high variation).
Reboot again, and now 72% (low variation).

Change back to Adaptive.
Reboot again, and now 73% (low variation).
Reboot again, and now 73% (low variation).
Reboot again, and now 73% (low variation).

Change back to Prefer Max
Reboot again, and now 50% (high variation).
Reboot again, and now 65% (high variation).
Reboot again, and now 70% (low variation).

So I think this is an Adaptive vs Max Performance issue.
Change back to Adaptive.
Reboot again, and now 73% (low variation).
Reboot again, and now 55% (high variation).
Reboot again, and now 60% (high variation).

Change back to Prefer Max
Reboot again, and now 73% (low variation).

Installed the 331.82 drivers with a clean install, which defaults to Adaptive.
Back to 75% (low variation)
Reboot again, and now 75% (low variation).

Change to Prefer Max
Reboot again, and now 71% (low variation).

Change back to Adaptive.
Reboot again, and now 73% (low variation).

Not sure what is happening between reboots. Going to leave the 331.82 installed for now for the Win7-64 and 780 Ti cards.
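The "low variation" / "high variation" labels in the log above can be made concrete by looking at the spread of a run of utilization samples. A toy sketch only; the 5-point threshold and the hand-picked samples are assumptions for illustration, not anything EVGA Precision actually computes:

```python
from statistics import mean, pstdev

def classify(samples: list[float], spread_threshold: float = 5.0) -> str:
    """Label a run of GPU-utilization samples (percent) the way the
    posts above do: steady readings are 'low variation', readings
    that jump around are 'high variation'."""
    label = "low" if pstdev(samples) < spread_threshold else "high"
    return f"~{mean(samples):.0f}% ({label} variation)"

print(classify([72, 74, 77, 73, 75]))   # steady readings
print(classify([40, 60, 45, 58, 47]))   # jumping 40-60%
```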

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Message 35411 - Posted: 1 Mar 2014 | 5:05:15 UTC - in response to Message 35408.
Last modified: 1 Mar 2014 | 5:06:35 UTC

Not sure what is happening between reboots.


When you stop crunching to reboot, the GPU cools off. Once it's rebooted and crunching again, the temperature rises to the point where the card downclocks to keep its temperature down. If you have adequate cooling, the temperature then decreases, and when it drops below the "magic temp" the clocks boost again. If you don't have adequate cooling, the temperature will not drop and the clocks will stay at the low speed until you quit crunching and reboot, at which point the GPU cools off again.
____________
BOINC <<--- credit whores, pedants, alien hunters
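The thermal behaviour Dagorath describes amounts to a thermostat with hysteresis. A toy model only, with made-up temperatures and clock steps rather than NVIDIA's actual boost tables:

```python
def next_clock(clock_mhz: int, temp_c: float,
               throttle_temp: float = 80.0, boost_temp: float = 70.0,
               base: int = 980, boost: int = 1124) -> int:
    """Toy model of temperature-based boost: downclock above the
    throttle point, return to boost once the card cools below the
    'magic temp', otherwise hold the current clock (hysteresis)."""
    if temp_c >= throttle_temp:
        return base            # too hot: drop to base clock
    if temp_c <= boost_temp:
        return boost           # cool enough: full boost
    return clock_mhz           # in between: keep the current clock

# After a reboot the card starts cool and boosted; it heats past the
# throttle point, sits at base clock, then re-boosts once it cools.
clock = 1124
for temp in (45, 65, 75, 82, 75, 68):
    clock = next_clock(clock, temp)
print(clock)  # 1124: re-boosted after cooling to 68 C
```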

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35414 - Posted: 1 Mar 2014 | 9:39:22 UTC - in response to Message 35396.
Last modified: 1 Mar 2014 | 9:41:56 UTC

Just got my 780Tis in and running on 334.89. One is running SANTI_MAR at 71-73% utilization and the other is running NOELIA_FXA, averaging around 79% utilization.

These are my first runs on these cards. Anyone else running a 780Ti with 334.89? I'm curious how my numbers compare. GPU utilization dropped compared with the 680, but I think that was to be expected.

I'm also still running CPU tasks at 75%. I checked 50%, 62.5% and 75%, and there does not seem to be any effect on the GPU tasks.

On my W7 rig, running 4 CPU tasks, I got ~80% utilization for a SANTI_MAR and ~85% for a NOELIA_FXA on high-end GK104 cards. However, I noticed that the tasks' utilization drifted a bit during the run: at times it was as high as 87% for the NOELIA_FXA and 82% for the SANTI_MAR, but at times it was also a bit lower (79% and 83%).

By varying the CPU usage setting from 50% to 75% I do see some changes: a 3 to 5% improvement at a 50% CPU setting in BOINC.

With bigger cards it's likely that WDDM introduces a larger overhead, since it's being asked to do more per unit time. I see it as a bottleneck somewhat akin to the bus limitations on a few cards (the 192-bit bus causes some performance loss on the GTX660Ti, albeit along with cache size); applied to bigger cards it's worse. While you can tweak a GTX660Ti to optimize performance/Watt, there is little that can be done to stop WDDM interfering, AFAIAA.

Jeremy, stop running CPU tasks and see what happens. Either you've found a bug in the driver, or it's because the CPU is being used. People should remember that CPU usage can hinder GPU tasks, because the GPU tasks still need CPU time. The interference varies by CPU application and by GPUGrid task type. Having a web page with constantly refreshing video open will reduce performance, as will screen savers...
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
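Related to skgiven's point about CPU tasks competing with the GPU app: BOINC's app_config.xml can budget a whole CPU core per GPU task without lowering the global CPU percentage. A sketch only; the app name acemdlong is an assumption and must match the &lt;name&gt; entries in your client_state.xml:

```xml
<!-- app_config.xml in the gpugrid.net project directory.
     <cpu_usage> tells the BOINC scheduler to budget one full CPU
     core per GPU task. The app name here is an assumption; check
     client_state.xml for the real names on your host. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, re-read config files from the BOINC Manager (or restart the client) for it to take effect.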

biodoc
Send message
Joined: 26 Aug 08
Posts: 183
Credit: 6,772,414,375
RAC: 17,897,826
Level
Tyr
Message 35415 - Posted: 1 Mar 2014 | 10:56:25 UTC

I have some data on my 780Ti to share.

1) OS is 64-bit Linux Mint 16 (Cinnamon)
2) 3930K on Asus sabertooth x79 MB
3) 1 CPU is "dedicated" to GPUGrid
4) Installed 331.49 drivers a few days back from Nvidia website
5) The last 20 WUs have been Santi and crunch time is under 20,000 seconds

http://www.gpugrid.net/results.php?hostid=166613

6) CPU utilization is about 25%
7) I have coolbits enabled so I can control GPU fan speed manually. Currently the fan speed is set to 65%.

Data from nvidia-settings (new info such as GPU utilization, memory used and PCIe bandwidth appeared in this driver release, I think). This was a pleasant surprise for me.

1) GPU utilization varies quite a bit, but usually sits at 80-90%.
2) As Dagorath mentioned in this thread, if you keep the GPU temperature well below the maximum (82C on this card), the boost clock remains steady (1006MHz on this card).
3) Fan speed is at 65%, which keeps the temperature at 73C and thus a steady max boost clock.
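For anyone wanting to replicate point 7, Coolbits is enabled in the X server configuration. A sketch of the relevant xorg.conf fragment; the identifier will differ per system, and on drivers of that era the value "4" unlocked manual fan control:

```
Section "Device"
    Identifier  "Device0"
    Driver      "nvidia"
    # Coolbits "4" unlocks manual fan control in nvidia-settings;
    # bit values can be OR'd together to unlock other controls.
    Option      "Coolbits" "4"
EndSection
```

After restarting X, the fan can be pinned from the nvidia-settings GUI or its command line; note that the exact fan attribute names vary across driver generations, so check `nvidia-settings -q fans` on your system.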


TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35416 - Posted: 1 Mar 2014 | 11:24:59 UTC

My GTX780Ti runs at 70°C at the moment, which is cool for this card, but it runs at a steady 875.7MHz with GPU usage at 76% on Win7. Five CPU threads run Rosetta; the other three are left for the GPU and other programs.
Occasionally the card runs a bit faster, at 900-915MHz, even when the temperature is higher. The driver is 331.82. It's not as fast as others manage with the same card, but at least it runs error-free. I will not change anything, and will buy a second card from EVGA as soon as they are available again; the one I have now is from Asus.
____________
Greetings from TJ

Jeremy Zimmerman
Send message
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35419 - Posted: 1 Mar 2014 | 11:31:44 UTC - in response to Message 35414.

When you stop crunching to reboot the GPU cools off. When it's rebooted and crunching again the temperature rises to the temp where the card downclocks to try to keep its temperature down. If you have adequate cooling the temperature will decrease and when it drops below the "magic temp" the clocks boost. If you don't have adequate cooling then the temp will not drop and the clocks will stay at the low speed until you quit crunching and reboot at which time the GPU cools off.


Dagorath, all of these utilization figures were taken right after a reboot. The temps were ~45C at the start, and I only let the system run for a couple of minutes each time. The WU below finished last night, and you can see all the reboots, driver changes and temperature ranges during the testing. The only thing I am trying to understand is why utilization would jump around so much between reboots. It seems "sticky": whatever the reboot sets is where it stays. Of course, I did not let it run for hours or finish a WU to see whether it stayed at the low utilization. The utilizations I reported did not change or show any slope while I let the system run for those few minutes. Once BOINC started processing (after the 30-second delay), utilization simply went from 0 to the number reported and stayed flat, with the stated variation.
http://www.gpugrid.net/result.php?resultid=7844175


Jeremy, stop running CPU tasks and see what happens. Either you've found a bug in the driver or its because the CPU is being used. People should try to remember that CPU usage can hinder GPU tasks because the GPU's still need to use the GPU's. The interference varies by CPU application and by GPUGrid task type. Having a web page with video on it constantly refreshing will reduce performance as will screen-savers...


skgiven, before and after the series of reboots I tried from 0 to 5 extra CPU tasks (not counting the 2 threads for the GPUGrid tasks), but I could not move the needle noticeably on utilization. I am talking about as much as a 30% drop in GPU utilization after a reboot, not the single-digit differences (if I remember correctly) that you had discussed. Also, it varies from reboot to reboot. I will try the reboot series again with 0 extra CPU tasks, just in case, to rule that out.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35433 - Posted: 1 Mar 2014 | 19:40:38 UTC - in response to Message 35419.

In the past there were issues with the Prefer Maximum Performance selection, and I think they still exist in XP (you can't set it). It's likely there is an issue again for some GTX780Ti cards. If that's not the case, perhaps the clocks/boost are not behaving normally. Are you using a start-up delay?
Jeremy and TJ, you are both using similar processors (i7-477x). Are there any BIOS or CPU/bus updates available, just in case this oddity is due to some incompatibility?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35437 - Posted: 1 Mar 2014 | 22:36:41 UTC

Reading these posts, an interesting idea occurred to me regarding the lower GPU utilization when set to Prefer Maximum Performance.

Assume the GPU switches turbo states very quickly, faster than GPU-Z sees or the driver reports. Then in adaptive mode the GPU could be dropping clocks during short periods of low utilization. Maximum performance mode would avoid this and stay at high clocks... and thus report a lower utilization, because there was (briefly) not enough work for the entire GPU anyway.

This would result in similar runtimes for both modes, despite the different utilizations, if adaptive mode works as it should. And it might increase power draw in maximum performance mode. Both predictions could easily be tested... probably ruling this theory out ;)

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35440 - Posted: 2 Mar 2014 | 1:21:26 UTC - in response to Message 35433.

In the past there were issues with the Prefer Maximum Performance selection, and I think they still exist in XP (you can't set it). It's likely there is an issue again for some GTX780Ti cards. If that's not the case, perhaps the clocks/boost are not behaving normally. Are you using a start-up delay?
Jeremy and TJ, you are both using similar processors (i7-477x). Are there any BIOS or CPU/bus updates available, just in case this oddity is due to some incompatibility?

I don't have a start-up delay, and I recently updated the motherboard BIOS because USB 3.0 wasn't working with the Asus factory version.

Indeed, Jeremy and I have the same CPU (4770/4771), the same OS and the same settings.
Two differences: I also have a GTX770 in the machine, and both my cards are from Asus; Jeremy's are EVGA.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35446 - Posted: 2 Mar 2014 | 12:00:19 UTC - in response to Message 35440.

Two differences: I have also a GTX770 in it both from Asus. Jeremy has EVGA.

The card manufacturer normally doesn't matter; it's the same chip and drivers anyway. Your second card will force both cards into PCIe 3.0 x8, but this has not mattered much at GPUGrid before, so I doubt it contributes significantly to any runtime differences.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35463 - Posted: 2 Mar 2014 | 20:22:05 UTC - in response to Message 35446.
Last modified: 2 Mar 2014 | 21:18:38 UTC

I noticed my GTX770 was operating at 1045MHz today (334.89 driver) with 49% GPU power usage. When I opened NVIDIA Control Panel and went to 3D settings, Power Management Mode was set to Adaptive. I changed this to Prefer Maximum Performance and, in MSI Afterburner, increased the clock by 78MHz and the power limit by 5%. This made no difference to the frequency until I restarted the system; with previous drivers, the changes were immediate. Prior to installing 334.89 I had Prefer Maximum Performance selected.
I was using an E3-1265L V2 Xeon CPU with the Turbo turned off in the Bios, so I turned Turbo on and allowed the OS to control it. It's now at 3100MHz and I'm running 4 CPU tasks. The GTX770 is now clocked at 1228MHz (17.5% faster than it was), and power usage is 67%.
I found out that the CPU doesn't like fast RAM (didn't work with 2133MHz) but it's a 45W TDP processor and I'm not running anything that needs fast RAM.

Note that after you install a driver NVidia Control Panel settings need to be reconfigured, and now it appears that you have to restart the system for them to stick.
There can also be problems with driver dependent software (such as MSI afterburner), so it's usually a good idea to reinstall these.

I upgraded BOINC this evening and afterwards the GTX770 was back to 1045MHz and 50% GPU power!

AFAIC 334.89 is a bad driver. I'm going back to 332.21 and I suggest others do the same if the clocks drop for you too.

Over the last year or so NVidia have done some serious messing around with Power Settings. Whoever is at it, isn't very good at it!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35485 - Posted: 3 Mar 2014 | 19:13:04 UTC - in response to Message 35463.

I'm going back to 332.21 and I suggest others do the same if the clocks drop for you too.


I've come to the same conclusion over the last couple of days. I tested 334.89 and 331.82 several times under different conditions (Adaptive vs Max Performance, reboot vs no reboot between setting changes, clean install vs normal upgrade in GeForce Experience). I finally came back to 332.21 and performance has been rock-solid since. Both cards are at full boost and seem to be staying that way from WU to WU, and my temps are even a couple of degrees lower than with 331.82 at the same GPU load. I really liked being able to run extra CPU tasks with 334.89, but it doesn't seem to give reliable performance with the 780Ti. It worked very well with my 680s, so I may use it with those when I get them back into a crunching box.

Of course, now that I've posted this I'm fully expecting things to start acting funny again...
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35498 - Posted: 4 Mar 2014 | 12:34:25 UTC - in response to Message 35485.

Hi Matt, I see your times on the 780Ti and am impressed. I also have Win7, but my times are 24000-27000 seconds. The card runs at 878.7MHz, memory clock at 1749.6MHz, and 69-70°C (GPU-Z). I run only 5 tasks on my CPU, 2 for the GPUs and 1 free.
Will you please tell me what you have done to get these times?
Thanks.
____________
Greetings from TJ

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35499 - Posted: 4 Mar 2014 | 14:16:08 UTC - in response to Message 35498.

Hi Matt, I see your times on the 780Ti and am impressed. I also have Win7, but my times are 24000-27000 seconds. The card runs at 878.7MHz, memory clock at 1749.6MHz, and 69-70°C (GPU-Z). I run only 5 tasks on my CPU, 2 for the GPUs and 1 free.
Will you please tell me what you have done to get these times?
Thanks.


I'm afraid there isn't much I can tell you. The cards' base clock is 980MHz and when they're at full boost, one runs at 1124MHz and the other at 1137MHz. I've never had them downclock below their base speed. The base memory on the cards is 7000MHz but I run in SLI for gaming so the reported memory speed per card is 3500MHz. The cards are typically staying in the 67C - 72C range depending on how warm the room is. I'm currently running CPU at 50% which translates to 3 CPU tasks when also running GPUGrid on 332.21 but I can run at least one more without affecting GPU performance.

The only setting I've changed is "Prefer Maximum Performance" vs the "Adaptive" default power usage in the Nvidia Control Panel.

Jacob Klein started a new thread on maintaining max boost on Kepler cards which looks promising. I've not implemented the technique yet, but I may soon if my performance doesn't stabilize more to my liking.


____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 35500 - Posted: 4 Mar 2014 | 15:41:35 UTC - in response to Message 35499.

I have also set the NVidia 3D settings to Prefer Maximum Performance, as that is the advice here, but I have also tried running on Adaptive for a few days and it makes no difference.
We have the same OS, same driver and same temperature, but different clock speeds.
Do you have a program running like MSI Afterburner, EVGA Precision or something similar?

____________
Greetings from TJ

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35501 - Posted: 4 Mar 2014 | 15:48:15 UTC - in response to Message 35500.

I run Precision X but mostly just for monitoring the cards. I've tried using it in the past to overclock my GTX 680s, but that tends to cause tasks to fail on this project.
____________

ConflictingEmotions
Send message
Joined: 6 Jan 09
Posts: 4
Credit: 151,278,745
RAC: 0
Level
Ile
Message 35502 - Posted: 4 Mar 2014 | 15:50:39 UTC - in response to Message 35485.

I'm going back to 332.21 and I suggest others do the same if the clocks drop for you too.

...

Of course, now that I've posted this I'm fully expecting things to start acting funny again...

I concur with both (especially after having to revert the kernel to get the old driver back, and even then the first WU failed).

Thanks for all the info! On my Linux system (GTX 780) I found that while the new driver (or beta) dramatically dropped CPU usage, there was a significant increase in run times and my credit rate suffered. The difference in run times between 2 adjacent WUs on different drivers was 4 hours! The downside is that the system with the old driver is louder, but I don't have the tools in place to see what is going on.

Under old driver:
http://www.gpugrid.net/workunit.php?wuid=5202339

536x-SANTI_MARwtcap310-12-32-RND6877_0
Run time 22,237.18
CPU time 22,016.17
Credit 115,650.00


Under the new driver:
http://www.gpugrid.net/workunit.php?wuid=5222500

579x-SANTI_MAR422cap310-15-32-RND3745_0
Run time 35,427.61
CPU time 3,789.13
Credit 115,650.00
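For the record, the drop is easy to quantify from the two results above. A quick back-of-the-envelope sketch in plain Python (the credit and run-time figures are exactly those posted; the calculation itself is mine):

```python
# Compare credit throughput for the two SANTI WUs posted above.
# Numbers are taken directly from the linked results.

def credit_per_hour(credit, run_time_s):
    """Credit earned per hour of wall-clock run time."""
    return credit * 3600.0 / run_time_s

old = credit_per_hour(115650.00, 22237.18)  # old driver
new = credit_per_hour(115650.00, 35427.61)  # new driver

print(f"old driver: {old:,.0f} credits/hour")
print(f"new driver: {new:,.0f} credits/hour")
print(f"throughput drop: {(1 - new / old) * 100:.0f}%")
```

So the same card earns roughly a third less credit per hour on the new driver, despite the much lower CPU time.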

Jozef J
Send message
Joined: 7 Jun 12
Posts: 112
Credit: 1,118,845,172
RAC: 0
Level
Met
Message 35503 - Posted: 4 Mar 2014 | 17:07:53 UTC

I have exactly the same result: times worsened by about four hours, on both the GTX Titan and the GTX 680. On the GTX Titan my run times are now about 29,000 seconds, which is very wrong.
A solution urgently needs to be found for everyone.

Snow Crash
Send message
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Level
Lys
Message 35582 - Posted: 10 Mar 2014 | 13:59:39 UTC - in response to Message 35503.
Last modified: 10 Mar 2014 | 14:08:55 UTC

335.23 WHQL released today ... I'll post back later on Win7 CPU usage
____________
Thanks - Steve

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35584 - Posted: 10 Mar 2014 | 15:19:49 UTC - in response to Message 35582.
Last modified: 10 Mar 2014 | 16:18:53 UTC

335.23 WHQL released today ... I'll post back later on Win7 CPU usage


Thanks for the heads up!

I have confirmed that, on Windows 8.1 x64, 335.23 uses the same "low CPU usage" on my Kepler device, as the 334.89 and 334.67 drivers.

Also, I'm doing testing on whether it will fall back from Max Boost for reason "Util" when the load doesn't meet a certain threshold. I suspect it will fall back. Therefore, I recommend using the following 2 posts to "Force Max Boost" on any Kepler system:
http://www.gpugrid.net/forum_thread.php?id=3647&nowrap=true#35410
http://www.gpugrid.net/forum_thread.php?id=3647&nowrap=true#35562

Edit: Yup, I confirmed that the behavior where it can downclock from Max Boost and stay downclocked, still exists. I completed a task, there was a brief 15 second pause between tasks (so it downclocked to 3D Base Mhz), then a new task started. Even at solid 82-84% GPU Usage, the GPU is not boosting back up at all. It's probably not considered a bug by NVIDIA, since in their eyes, there's not enough demand on the GPU to warrant boosting. So... I recommend forcing Max Boost, per the links I posted.
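For anyone who wants to catch this state without staring at a monitoring tool, here's a rough sketch using nvidia-smi (assuming it's on your PATH and your driver supports --query-gpu; the base-clock value is a placeholder you'd set for your own card):

```python
# Rough sketch: poll nvidia-smi for the SM clock and GPU utilization and
# flag the "busy but not boosting" state described above.
# Assumptions: nvidia-smi is on the PATH and supports --query-gpu;
# the base_mhz value is a placeholder, not from any particular card.
import subprocess

def is_stuck(clock_mhz, util_pct, base_mhz, util_threshold=80):
    """True when the GPU is under load but the clock sits at/below 3D base."""
    return util_pct >= util_threshold and clock_mhz <= base_mhz

def query_gpu0():
    """Return (sm_clock_mhz, utilization_pct) for the first GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=clocks.sm,utilization.gpu",
         "--format=csv,noheader,nounits"]).decode()
    clock, util = out.strip().splitlines()[0].split(",")
    return int(clock), int(util)

# Usage (needs an NVIDIA GPU; poll every few seconds in a loop):
#   clock, util = query_gpu0()
#   if is_stuck(clock, util, base_mhz=980):
#       print(f"downclocked: {clock} MHz at {util}% load")
```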

Regards,
Jacob

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35599 - Posted: 10 Mar 2014 | 21:21:37 UTC - in response to Message 35584.

I didn't read the other thread yet.. does the performance with these newer drivers still degrade when you force max clocks? Because on my GT640 GPU load is a constant 99% (-> max boost with any driver) and yet I was seeing a severe performance drop with the previous WHQL.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35605 - Posted: 11 Mar 2014 | 8:16:26 UTC - in response to Message 35599.
Last modified: 11 Mar 2014 | 8:19:05 UTC

The GT640 doesn't Boost, so it shouldn't be impacted the same way.

Perhaps using less CPU for the GPUGrid WU's is in some way to blame. Maybe you ran more CPU tasks (as an extra CPU core was free), or the bus became more bottlenecked than before (GPU task related). Sometimes what's running on the CPU can greatly impact GPU performance (some CPU apps run at higher priorities than others), and then there is the HD Graphics 4000 - if you ran Einstein (or other iGPU tasks) there is a chance it started gobbling up all the resources.

I'm running the 335.23 drivers (about 16h). So far no drop in Boost, but I have OC'ed both GPU's (just using MSI Afterburner) - 770@1254 & 670@1202. GPU usage varies by task; ~80% for SANTI_MAR and 92% for the latest GIANNI WU's.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

mikey
Send message
Joined: 2 Jan 09
Posts: 292
Credit: 2,266,128,615
RAC: 11,419,207
Level
Phe
Message 35607 - Posted: 11 Mar 2014 | 11:16:13 UTC - in response to Message 35584.

335.23 WHQL released today ... I'll post back later on Win7 CPU usage


Thanks for the heads up!

I have confirmed that, on Windows 8.1 x64, 335.23 uses the same "low CPU usage" on my Kepler device, as the 334.89 and 334.67 drivers.

Also, I'm doing testing on whether it will fall back from Max Boost for reason "Util" when the load doesn't meet a certain threshold. I suspect it will fall back. Therefore, I recommend using the following 2 posts to "Force Max Boost" on any Kepler system:
http://www.gpugrid.net/forum_thread.php?id=3647&nowrap=true#35410
http://www.gpugrid.net/forum_thread.php?id=3647&nowrap=true#35562

Edit: Yup, I confirmed that the behavior where it can downclock from Max Boost and stay downclocked, still exists. I completed a task, there was a brief 15 second pause between tasks (so it downclocked to 3D Base Mhz), then a new task started. Even at solid 82-84% GPU Usage, the GPU is not boosting back up at all. It's probably not considered a bug by NVIDIA, since in their eyes, there's not enough demand on the GPU to warrant boosting. So... I recommend forcing Max Boost, per the links I posted.

Regards,
Jacob


Have you tried playing a game, pausing it, then resuming it, and seeing if the same thing happens? Because if it does, that WOULD be a major problem for all role-playing gamers, and NVidia would be interested in fixing it. If it is only BOINC, then maybe the BOINC programmers need to make their software trigger the speed-up feature the way games do. If, on the other hand, the card never slows down when the game is paused, then that is a whole other kettle of fish.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35609 - Posted: 11 Mar 2014 | 12:01:21 UTC - in response to Message 35605.
Last modified: 11 Mar 2014 | 12:15:09 UTC

skgiven:
For one of those ~80% tasks... After it's been running for a while, suspend BOINC for about 6 minutes. Then resume BOINC. See if the clocks go back up to Max Boost. For me, if I recall, they generally didn't boost back up. That's why I had to force Max Boost.
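If you want to reproduce this test hands-free, something like the following sketch works (assuming boinccmd is installed and can reach the local BOINC client; the 6-minute pause matches the test described above):

```python
# Sketch of the suspend/resume test above, automated via boinccmd.
# Assumption: boinccmd is on the PATH and can talk to the local client.
import subprocess
import time

def run_mode_cmd(mode):
    """Build the boinccmd invocation; mode is 'always', 'auto' or 'never'."""
    return ["boinccmd", "--set_run_mode", mode]

def suspend_resume_test(pause_seconds=360):
    """Suspend all BOINC work, wait ~6 minutes, then resume.
    Afterwards, watch your monitoring tool to see if Max Boost returns."""
    subprocess.check_call(run_mode_cmd("never"))
    time.sleep(pause_seconds)
    subprocess.check_call(run_mode_cmd("auto"))

# Usage (needs a running BOINC client):
#   suspend_resume_test()
```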

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35613 - Posted: 12 Mar 2014 | 8:23:12 UTC - in response to Message 35605.

The GT640 doesn't Boost, so it shouldn't be impacted the same way.

Right. That's why I asked whether you guys also see a performance drop even after you fix the boost clock.

Perhaps using less CPU for the GPUGrid WU's is in some way to blame.

Yes, that's the most obvious explanation.

Maybe you ran more CPU tasks (as an extra CPU core was free), or the bus became more bottlenecked than before (GPU task related). Sometimes what's running on the CPU can greatly impact GPU performance (some CPU apps run at higher priorities than others), and then there is the HD Graphics 4000 - if you ran Einstein (or other iGPU tasks) there is a chance it started gobbling up all the resources.

I ran one more CPU task, saw the result, then reverted back to the same CPU load as before (80-90%) and still didn't see performance where it used to be. In fact, there was next to no difference between +1 and +0 CPU tasks if I remember correctly.

The HD4000 ran Einstein tasks just as it did before (and no performance change here either). The mix of other CPU projects didn't change either. And why would the bus suddenly become a bottleneck when the GPU is doing less work?

The GT640 is special in that it's more memory bandwidth-limited than any other current card. It could be that NVidia changed some functions so that they require more bandwidth, which wouldn't impact cards that aren't already limited. But memory controller utilization did not really change either (though I didn't check for a few % of difference).

Or it could be that the new sync'ing mechanism between CPU and GPU just costs some performance and you guys should also see it when you fix that boost clock. If this is true I'd expect the impact to increase with GPU speed.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35619 - Posted: 12 Mar 2014 | 20:37:09 UTC - in response to Message 35609.

skgiven:
For one of those ~80% tasks... After it's been running for a while, suspend BOINC for about 6 minutes. Then resume BOINC. See if the clocks go back up to Max Boost. For me, if I recall, they generally didn't boost back up. That's why I had to force Max Boost.

I have suspended several times for varying amounts of time (few minutes to a few hours) and my clocks go back up to my OC values, now 1254 and 1202MHz.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35620 - Posted: 12 Mar 2014 | 20:41:19 UTC - in response to Message 35619.

Well, thanks for testing. I wish I knew what my particular problem is. It's most obvious, for me, when a GPUGrid task completes. Due to my BOINC settings, it then begins downloading the new task, so there is a "lull" where there's no GPU activity, and then a couple minutes later, the new task kicks in, often not at Max Boost.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35622 - Posted: 12 Mar 2014 | 20:54:39 UTC - in response to Message 35613.

MrS, I found that the last 2 drivers were slightly faster on my W7 system with 2 GPU's than the 331.40 drivers, without changing any Boinc settings (use 75% of the CPU's). I'm not using the iGPU to crunch.

I expect the effect of the reduced polling is exacerbated by crunching on your iGPU. Maybe a CPU cache issue. I've noticed that running kernel-heavy work on the CPU (a VM) causes my GPUs' utilization to be more erratic, though some of this might be due to the recent drivers.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35629 - Posted: 13 Mar 2014 | 21:08:36 UTC
Last modified: 13 Mar 2014 | 21:09:20 UTC

Actually I tried the 335 yesterday and have completed 1.8 WUs since then - at normal crunching times, approximately within the margin of error! 34000 - 35000s per short task, whereas 334 WHQL landed at over 36000s. It could be that they fixed whatever was happening last time or that something just went wrong with my system last time... although that test lasted several days & WUs, with different settings.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35633 - Posted: 13 Mar 2014 | 23:19:13 UTC - in response to Message 35629.

If there isn't Gold at the end of a Rainbow, we can always hope there is...
Viel Glück! (Good luck!)
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35713 - Posted: 17 Mar 2014 | 21:29:05 UTC

I stand corrected - I don't see any performance drop any more with 335 WHQL on 2 hosts (GT640 and GTX660Ti)! The Boost state is a TDP-limited ~1.10 GHz at ~1.07 V just as before.

MrS
____________
Scanning for our furry friends since Jan 2002

Jeremy Zimmerman
Send message
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35730 - Posted: 19 Mar 2014 | 4:17:25 UTC - in response to Message 35713.

Finally had some time to work on testing GPU Utilization under a few different situations tonight.

First things first: last week my computer started downclocking the GPU after I tried the 335.23 drivers. Even after switching back to the 331.82 drivers I still had the issue. The following sequence (strange as it may be) has now worked for a week and prevented the downclocking.

1) In EVGA Precision, hit default and apply to cards.
2) Downclock the GPU speed by 100MHz (-100).
3) Power the computer down for a minute.
4) Start back up, let BOINC start, and then load my favorite profile in EVGA Precision (Power Target = 105%, Prioritize Temp to 72C, +50MHz overclock).

Have not had the issue since. Cross fingers, knock on wood, etc.

Previously in this thread I discussed how, after reboots, the resulting GPU utilization was all over the place. The above reboot sequence also gets me a high, steady utilization. Based on skgiven's feedback about extra CPU tasks, I wanted to test that as well. So the following test covers 331.82 vs 335.23 drivers, CPU speeds of 3.6 vs 4.1GHz (could not downclock any further), and from 2 to 8 threads running. Two threads minimum due to the two GPUs. I added one thread at a time; the added threads were SETI or Einstein units.

The plotted data is GPU usage as logged by EVGA Precision with 1-second polling. I polled each setup for one minute and discarded the 4 data points before and after each change in BOINC's thread count (Use at most % of CPUs: 25, 38, 50, 63, 75, 88, 100%). I powered down for one minute between driver/CPU-speed changes. App_config gives one full thread to each GPU.
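The trimming and averaging just described can be sketched in a few lines of Python (the sample values below are made up purely for illustration):

```python
# Sketch of the averaging above: 1-second GPU-usage samples per
# configuration, dropping the 4 samples at each end that straddle a
# settings change before averaging. Sample values are illustrative only.

def mean_usage(samples, discard=4):
    """Average a run of per-second GPU-usage samples, dropping `discard`
    samples at each end (the ramp around a settings change)."""
    trimmed = samples[discard:-discard] if discard else samples
    return sum(trimmed) / len(trimmed)

# e.g. one minute of polling at a given thread count:
run = [70, 72, 75, 78] + [90] * 52 + [80, 76, 73, 71]
print(mean_usage(run))  # -> 90.0, ramp-up/ramp-down samples dropped
```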



So on my computer setup with Santi WU's:
* 331.82 gives better GPU utilization than 335.23
* GPU utilization drops after adding a 5th thread (third extra CPU task)
* The faster the CPU, the better: 4.1GHz beat 3.6GHz

Testing was with the following two WU's. You can see the restarts, driver version and temperatures.
http://www.gpugrid.net/result.php?resultid=7950990
http://www.gpugrid.net/result.php?resultid=7950534

End result: the Santi WU's are now processing in around 18000-18500 seconds pretty consistently with 331.82, 50% CPU usage, and 4.1GHz.
