
Message boards : Number crunching : Maintaining Max Boost MHz on Kepler GPU using NVIDIA Inspector

Author Message
Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35410 - Posted: 1 Mar 2014 | 4:58:13 UTC
Last modified: 1 Mar 2014 | 4:59:26 UTC

I was recently having a problem where, even though a GPUGrid work unit was running, my GPU was not running at its Max Boost MHz. The same issue was happening in my game, iRacing, too.

I did some research, and found that there is a way to force it. See here:
http://www.overclock.net/t/1267918/guide-nvidia-inspector-gtx670-680-disable-boost-fixed-clock-speed-undervolting
This is a pretty nice guide on force-overclocking.
You can run: nvidiaInspector.exe /?
... to see the list of command-line arguments.

To fix my particular problem:
1) I ran: nvidiaInspector.exe -forcepstate:2,2
Note: The first '2' is the GPU index; in nVidia Inspector's dropdown my Kepler is last in the list, shown with a (2), which is why I used 2 here. The second '2' is the P-state being forced (P2).
2) In nVidia Inspector, on the P2 GPU Clock, I unlocked the Max and set it to my Max Boost (1241 MHz). I actually had to set it to 1242 for it to work correctly. (See the one-line version just below.)
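
For reference, the same thing can be done in a single command line; the GPU index "2" and the clock value "1242" are specific to my card, so substitute your own values:
nvidiaInspector.exe -setGpuClock:2,2,1242 -forcepstate:2,2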

My GPU clock now stays at 1241 MHz, both when a GPUGrid task is running and when I'm gaming, even under light load. Mainly, though, it ensures that GPUGrid tasks get done as quickly as possible, since previously the card wasn't reaching Max Boost.

Hope this helps someone!
Jacob Klein

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Message 35423 - Posted: 1 Mar 2014 | 14:34:13 UTC - in response to Message 35410.

I've found that max boost can usually be sustained by keeping the GPU temperature below 80C.

If you are interested, have a poke about in the output progress.log file and plot the temperatures and instantaneous performance (ns/day) that are periodically reported. That'll give you an idea of how consistent your performance is (in optimal circumstances the rate should show little variation).
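
A quick way to pull those performance lines out before plotting (this assumes the reported lines literally contain the string "ns/day" and that progress.log is in the task's BOINC slot directory; adjust the pattern and path to whatever the log actually uses):
findstr /i /c:"ns/day" progress.log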

Matt

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35424 - Posted: 1 Mar 2014 | 14:50:00 UTC - in response to Message 35423.

You don't understand. I am keeping temps below 70°C.

GPU-Z actually reports that the reason for the performance cap was "Util", meaning "GPU Utilization": the GPUGrid task is not pushing the GPU hard enough for the driver to decide it needs the boost.

Same thing for iRacing.

It may be a bug in the drivers, I'm not sure. But it's causing the card not to stay at Max Boost, and I needed a fix.

Profile MJH
Project administrator
Project developer
Project scientist
Send message
Joined: 12 Nov 07
Posts: 696
Credit: 27,266,655
RAC: 0
Level
Val
Message 35425 - Posted: 1 Mar 2014 | 15:03:26 UTC - in response to Message 35424.

Huh, that's a bit rubbish. Presumably it's a side effect of the overhead of WDDM that's keeping the GPU load so low.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35426 - Posted: 1 Mar 2014 | 15:22:08 UTC - in response to Message 35424.
Last modified: 1 Mar 2014 | 15:31:58 UTC

It seems to me that, on my GTX 660 Ti on 334.89 drivers, it doesn't Boost at all unless it can get at least 88% constant GPU Load. With my CPUs fully loaded, many GPUGrid tasks get less than that.

I can demonstrate this by starting BOINC fully loaded, watching the card in GPU-Z with a sensor refresh rate of 0.1 sec, confirming it doesn't reach 88%, and then lowering "Use at most x% of the processors" in steps (from 100, to 88, 75, 68, 50, etc.) until a threshold is reached. That threshold is 88% GPU Load. At that point the card boosts to Max, and PerfCap changes from "Util" to "VRel, VOp".
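
If you want to script that "Use at most x% of the processors" change while experimenting, rather than clicking through the preferences each time, here is a minimal sketch using BOINC's local override file. It assumes default Windows paths (C:\ProgramData\BOINC for the data directory, C:\Program Files\BOINC for the client), it overwrites any existing override file, and it may need an elevated Command Prompt:

rem Write a local preferences override capping CPU usage at 88%, then tell the client to re-read it
(
echo ^<global_preferences^>
echo   ^<max_ncpus_pct^>88^</max_ncpus_pct^>
echo ^</global_preferences^>
) > "C:\ProgramData\BOINC\global_prefs_override.xml"
"C:\Program Files\BOINC\boinccmd.exe" --read_global_prefs_override

If boinccmd can't connect, simply restarting the BOINC client has the same effect.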

And so, again, I wanted a fix. I'm not sure if it means results will get completed faster or not.

Apparently 88 mph wasn't the only "88" threshold required for advanced science.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35477 - Posted: 3 Mar 2014 | 6:32:21 UTC - in response to Message 35426.
Last modified: 3 Mar 2014 | 7:10:30 UTC

It's a bug in the 334.89 drivers. They are preventing some GPUs from boosting.

On my W7 system the GTX770 sometimes ran at the base clock, only boosting after a restart. GPU power dropped to ~49% when it could have been 65 to 70%. For me that resulted in a 17.5% loss of performance on the 770 (1045 instead of 1228 MHz). This is with Prefer Maximum Performance set. It appears that the boost is dropped when the app momentarily uses less of the GPU, but it doesn't come back up normally afterwards. I really don't see the point in ignoring user settings - it's my GPU and I want to be able to configure it!

    Instead of lowering the GPU core clock frequency, the hardware and software use other methods to put the GPU into a low power state when the GPU is idle or in response to changing application requirements. This ensures optimum power use while continuing to provide high graphics performance. Ref. 334.89 manual


The second GPU is a non-reference GTX670 (normally operating at ~54°C on air while crunching). It kept boosting normally.
Others have reported similar experiences.

We've been through similar driver problems in the past, going back to 280. I'm recommending the previous WHQL driver (332.21) until this is fixed.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile [AF>Amis des Lapins] Phil...
Send message
Joined: 16 Jul 13
Posts: 56
Credit: 1,626,354,890
RAC: 0
Level
His
Message 35480 - Posted: 3 Mar 2014 | 13:02:46 UTC
Last modified: 3 Mar 2014 | 13:26:02 UTC

Hello!

A short message, for information only, to tell you that I am running 2 x GTX 770 Gigabyte (ref. GV-N770OC-4GD) on an ASUS Z87-Pro motherboard + i7-4770K.
Driver: 334.89
RAM: DDR3 Corsair Vengeance 2 x 8 GB, 1600 MHz, CL10, LP

I am using EVGA Precision X to manually increase the GPUs' fan speed: fan speed (%) = temperature (°C) + 10. (I use the same fan config on my PNY 660 Ti.)

No OC, power target set at 100% - using the default options.

Usage: 24/7

One card is running all the time at 1240 MHz - 1.2 V and the other all the time at 1215 MHz - 1.87 V

GPU temps: 59°C / 62°C - air cooling - case open + small Antec external fans to help heat dissipation.

The 3 fans on these GPUs are great, but I am thinking about installing water cooling.

First step done: I just ordered a CPU water-cooling kit.

No problems with this config, except sometimes when I have to switch off the computer or BOINC Manager; that action occasionally leads to computation errors.

Kind Regards,

Philippe

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35481 - Posted: 3 Mar 2014 | 13:16:20 UTC

I gave feedback here:
https://forums.geforce.com/default/topic/690370/geforce-drivers/official-nvidia-334-89-whql-display-driver-feedback-thread-released-2-18-14-/post/4141663/#4141663

I don't know if we'll get a response.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 6,169
Level
Trp
Message 35482 - Posted: 3 Mar 2014 | 14:51:39 UTC - in response to Message 35480.

One card is running all the time at 1240 MHz - 1.2 V and the other all the time at 1215 MHz - 1.87 V

This is actually 1.187V, right?

Jozef J
Send message
Joined: 7 Jun 12
Posts: 112
Credit: 1,118,845,172
RAC: 0
Level
Met
Message 35484 - Posted: 3 Mar 2014 | 19:11:48 UTC - in response to Message 35424.

You don't understand. I am keeping temps below 70°C.

GPU-Z actually reports that the reason for the performance cap was "Util", meaning "GPU Utilization": the GPUGrid task is not pushing the GPU hard enough for the driver to decide it needs the boost.

Same thing for iRacing.

It may be a bug in the drivers, I'm not sure. But it's causing the card not to stay at Max Boost, and I needed a fix.



I have exactly the same problem on my GTX Titans.
Card 1 is perf-capped by "Util" at 71°C with roughly 70% GPU load.
Cards 2 and 3 show "VRel" at 78°C and 62°C, with GPU loads of only about 60% and 50%, no more.
I set the fan speed on all the cards to 3800 RPM.

But it does not help. After searching about this problem on the internet, I found that a lot of people with Kepler and older NVIDIA graphics cards have it.
It's probably a mistake in the PCB design of cards from other manufacturers as well as in the NVIDIA driver software.

It seems to me the problem may have started with the latest NVIDIA driver, but from what people report online, even a clean reinstall of the OS and drivers did not help.
So it's an NVIDIA problem.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35486 - Posted: 3 Mar 2014 | 19:14:19 UTC
Last modified: 3 Mar 2014 | 19:15:25 UTC

I agree that it sounds like NVIDIA drivers are not properly recognizing that the GPU is being used, or perhaps the "utilization curve" that determines when downclocking occurs is not ideal for us.

Did you try my workaround, in the first post? It's been working nicely for me.

Jozef J
Send message
Joined: 7 Jun 12
Posts: 112
Credit: 1,118,845,172
RAC: 0
Level
Met
Message 35487 - Posted: 3 Mar 2014 | 19:54:49 UTC - in response to Message 35486.

I agree that it sounds like NVIDIA drivers are not properly recognizing that the GPU is being used, or perhaps the "utilization curve" that determines when downclocking occurs is not ideal for us.

Did you try my workaround, in the first post? It's been working nicely for me.


Using NVIDIA Inspector I set card 1 to the estimated max of 980 MHz that GPU-Z showed under "VRel", but it did not work; the card is now underclocked to 880 MHz, so something is preventing it from clocking up, and temperatures are certainly not the cause.

Cards 2 and 3 have a mild overclock, max 1000 and 980 MHz, so no success there either; they are still about 50 MHz short of their boost clocks, and the card temperatures are the same as I wrote above. It must also be a problem in the driver.

Matt
Avatar
Send message
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 35494 - Posted: 4 Mar 2014 | 4:38:57 UTC - in response to Message 35486.

Did you try my workaround, in the first post? It's been working nicely for me.


This sounds great as I am STILL getting one (and only ever one - GPU0 for some reason but never GPU1) of my cards not boosting or boosting and then throttling back to base speed.

I may give this a try in the next couple of days if I see the behavior continue. Is that 1241 MHz on your card overclocked at all, or just the normal max boost? Last time I tried any overclocking on GPUGrid I ended up with a lot of failed WUs.
____________

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35495 - Posted: 4 Mar 2014 | 4:43:24 UTC - in response to Message 35494.
Last modified: 4 Mar 2014 | 4:44:52 UTC

I have a factory-overclocked eVGA GTX 660 Ti 3GB FTW GPU.
The factory overclocks are: base 3D clock 1045 MHz, Max Boost clock 1241 MHz.

I don't apply any additional overclocking, but I do apply a custom fan curve to keep the temp below 70°C. That way, it runs at the Max Boost clock and yields very successful results.

The workaround in post 1 keeps it at the Max Boost clock even when the drivers would normally back it off to the base 3D clock because of low utilization.

Profile [AF>Amis des Lapins] Phil...
Send message
Joined: 16 Jul 13
Posts: 56
Credit: 1,626,354,890
RAC: 0
Level
His
Message 35496 - Posted: 4 Mar 2014 | 10:24:07 UTC - in response to Message 35482.

One card is running all the time at 1240 MHz - 1.2 V and the other all the time at 1215 MHz - 1.87 V

This is actually 1.187V, right?


Hello,

You're totally right, my mistake ....

Thank You

Philippe

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35520 - Posted: 5 Mar 2014 | 12:32:29 UTC - in response to Message 35481.
Last modified: 5 Mar 2014 | 12:33:46 UTC

I got a response in the driver feedback thread.
https://forums.geforce.com/default/topic/690370/geforce-drivers/official-nvidia-334-89-whql-display-driver-feedback-thread-released-2-18-14-/post/4143297/#4143297

Apparently, the "Prefer maximum performance" setting is tied to "choosing a P-state", and has no bearing/effect on GPU Boost.

Also, they are requesting that I send them my vBIOS, which I will do shortly.

For reference, I created 2 .bat files to tinker with forcing a max boost of 1241 MHz on my "index 2" eVGA GTX 660 Ti 3GB FTW:

"Force Max Boost.bat"
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -setGpuClock:2,2,1242 -forcepstate:2,2

"Reset To Default.bat"
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -restoreAllPStates:2 -forcepstate:2,16

If anyone knows how to easily reproduce this problem, please let me know. I think I'll have to uninstall and then reinstall my driver to reproduce it, since I believe that monkeying with NVIDIA Inspector has put my setup in a state where the problem no longer shows up.

Time to reinstall the driver to see if I can get this to reproduce.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35527 - Posted: 5 Mar 2014 | 18:26:32 UTC
Last modified: 5 Mar 2014 | 18:39:26 UTC

So.. today I uninstalled 334.89, then reinstalled it, so that I could be sure that I was running the "stock P-states" without any NVIDIA Inspector command lines to possibly interfere.

Sure enough, I'm watching my GPU right now... GPUGrid is using ~82% GPU Usage like normal, power is at 76%, yet the clock is at 1058 MHz (instead of the max boost of 1241 MHz), and GPU-Z reports PerfCap Reason: Util. Even if I suspend all BOINC tasks except for GPUGrid.net, the GPU Load on this SANTI_MAR420 task goes up to 86%, yet the clock remains low at 1058 MHz and doesn't boost.
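
As a cross-check on GPU-Z, the clock, utilization and temperature can also be polled from the command line with nvidia-smi, which the driver installs in its NVSMI folder (the path below is the usual default, and some query fields may report "Not Supported" on GeForce cards):
"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" --query-gpu=clocks.sm,utilization.gpu,temperature.gpu --format=csv -l 5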

So, for anyone that wants to maintain Max Boost while using these drivers, I still recommend the link in the first post of this thread.

In the NVIDIA Forums, ManuelG requested I try to determine which driver this started with. So I will now uninstall 334.89 (and all its components), and then see if I can reproduce it on the previous version (I keep local copies). So I'll attempt to reproduce it on the following: 334.67, 332.21, 331.93, 331.82.. until I find the version where it doesn't happen anymore.

Wish me luck!
Jacob

ConflictingEmotions
Send message
Joined: 6 Jan 09
Posts: 4
Credit: 151,278,745
RAC: 0
Level
Ile
Message 35528 - Posted: 5 Mar 2014 | 19:13:17 UTC - in response to Message 35527.

In the NVIDIA Forums, ManuelG requested I try to determine which driver this started with. So I will now uninstall 334.89 (and all its components), and then see if I can reproduce it on the previous version (I keep local copies). So I'll attempt to reproduce it on the following: 334.67, 332.21, 331.93, 331.82.. until I find the version where it doesn't happen anymore.

Wish me luck!
Jacob

Hopefully 332.21 (I'm on Linux)!
See the Graphics cards (GPUs): New driver for nvidia thread:
http://www.gpugrid.net/forum_thread.php?id=3634

Here's to you finding it!

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35539 - Posted: 6 Mar 2014 | 20:38:50 UTC

334.67 is also exhibiting the behavior, where a GPUGrid task at ~86% GPU Utilization was not getting Max Boost due to PerfCapReason "Util". Note: I think this is the first Windows driver that incorporated the "decreased CPU Polling" change.

Time to uninstall 334.67 and install 332.21, to test it.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 35540 - Posted: 6 Mar 2014 | 20:53:46 UTC - in response to Message 35528.

In the NVIDIA Forums, ManuelG requested I try to determine which driver this started with. So I will now uninstall 334.89 (and all its components), and then see if I can reproduce it on the previous version (I keep local copies). So I'll attempt to reproduce it on the following: 334.67, 332.21, 331.93, 331.82.. until I find the version where it doesn't happen anymore.

Wish me luck!
Jacob

Hopefully 332.21 (I'm on Linux)!
See the Graphics cards (GPUs): New driver for nvidia thread:
http://www.gpugrid.net/forum_thread.php?id=3634

Here's to you finding it!

It's not an issue for me with 332.21 on Windows (version functionality might vary on Linux WRT the numbering). It either turned up with the Beta, which I only tested for a few days, or with the WHQL. Since other people found some performance drop with the WHQL, while I didn't, it most likely turned up then (and was possibly exacerbated by the subsequent WHQL).

Apparently, the "Prefer maximum performance" setting is tied to "choosing a P-state", and has no bearing/effect on GPU Boost.

So it doesn't do what it says on the tin then?!? Another bogus setting!
We need to be able to control our GPUs, not have NVIDIA decide what they should be doing.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35561 - Posted: 8 Mar 2014 | 11:58:28 UTC
Last modified: 8 Mar 2014 | 12:27:32 UTC

Now this is just silly. I'm running the 332.21 drivers, as a test, and... well, I woke up, and sure enough, it was clocked at 1045 instead of the expected 1241, on an e1s55_1-GIANNI_ntl9b task.

GPU-Z said Util, and a task was running, at around 75% GPU Usage. I suspended everything except that single GPUGrid task, and usage went up to about 83%. But the clock remained low, for reason Util. Even last night, in my game (iRacing), if the GPU Usage got low enough, the driver would unapply the Boost.

I conclude that 332.21 is actually affected. It also seems that the "reason" for this problem is that the GPU Usage is lower than the threshold that GPU Boost thinks is necessary to apply Boost. It may be designed that way intentionally, and may not be a bug, as NVIDIA is all about conserving power, and Boost is unrelated to the "Prefer maximum performance" setting.

This sucks. I know GPU Usage depends on the task type running on the GPU, but... if it's sufficiently low, Boost will not be applied by the driver, unless you force it using the workaround in the first post.

If I run FurMark for a second or two, I can see Boost kick back in, and the clock goes to max boost. But then, if I turn it off and wait several hours, eventually it will unboost.

So, I think I'm not going to bother reinstalling the latest drivers, and I'll make "Force Max Boost" a part of my startup routine, likely via a .bat file. If anyone has any other ideas or suggestions, please let me know.

Thanks,
Jacob

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35562 - Posted: 8 Mar 2014 | 14:08:36 UTC - in response to Message 35561.

So, previously I was using Precision-X to set my Power Target (from 100% to 140%), and I'd have to set it again every time Windows was restarted.

But since I now want to apply nVidiaInspector settings at startup via a .bat file, I figured now would be a great time to set the Power Target there, too.

So, I have 2 .bat files, custom tailored for my GPU and settings. I'm setting GPU Index 2, because I have 3 GPUs, and I want to change settings for my eVGA GTX 660 Ti 3GB FTW, which shows as "(2)" in the dropdown in the nVidiaInspector UI. This GPU supports a Power Target of up to 140%. The commands I'm using in the main .bat file are: Set the P2 GPU Clock to 1242, Force any 3D application to use the P2 state, and set the Power Target to 140%.

Force Max Boost.bat
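REM Pin the P2 clock at 1242 MHz on GPU index 2, force P-state P2, and raise the Power Target to 140%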
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -setGpuClock:2,2,1242 -forcepstate:2,2 -setPowerTarget:2,140

Reset To Default.bat
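REM Restore the stock P-states and stop forcing one (16 = none); this leaves the Power Target alone, so append -setPowerTarget:2,100 if you also want that back at 100%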
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -restoreAllPStates:2 -forcepstate:2,16

... and so, I created a shortcut to that "Force Max Boost.bat" file, in the following folder:
C:\Users\jacob_000\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup

It appears to be working well for me, but I will continue to test.

Thanks,
Jacob

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 35585 - Posted: 10 Mar 2014 | 16:21:20 UTC

I confirmed that the behavior where the card can downclock from Max Boost and stay downclocked still exists in 335.23. I completed a GPUGrid task, there was a brief 15-second pause between tasks (so it downclocked to the 3D base MHz), then a new task started. Even at a solid 82-84% GPU Usage on the new task, the GPU is not boosting back up at all. It's probably not considered a bug by NVIDIA, since in their eyes there's presumably not enough demand on the GPU to warrant boosting.

So... if you don't mind the increased wattage, I recommend forcing Max Boost, per the post right above this.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Message 35587 - Posted: 10 Mar 2014 | 17:09:30 UTC - in response to Message 35410.

I did some research, and found that there is a way to force it. See here:
http://www.overclock.net/t/1267918/guide-nvidia-inspector-gtx670-680-disable-boost-fixed-clock-speed-undervolting
This is a pretty nice guide on force-overclocking.

Hope this helps someone!
Jacob Klein

Thanks. I don't see the under-clocking on my GTX 660s and 650 Ti, but I did notice that one of my GTX 660s was now running at only 1.162 volts instead of its usual 1.175 volts, and I could not figure out why. It is now on a different motherboard and OS than before, but you would not think that would affect the voltage. The case is also different, though, and it runs a bit hotter now. As noted in that article, above 70°C the voltage on the cards is reduced, and that is exactly what happened to me. I didn't see that on my other GTX 660, but that is because I had already flashed the BIOS on that one and set the voltage to a fixed value.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36320 - Posted: 16 Apr 2014 | 2:49:22 UTC
Last modified: 16 Apr 2014 | 3:03:50 UTC

I wrote this tutorial for Forcing Max Boost, for my iRacing friends, but I'll post it here too, as it lays things out step by step.

Force Max Boost on a Kepler

I recently discovered that my Kepler GPU was not maintaining Max Boost when running iRacing, or when crunching a distributed computing work unit that had low GPU usage. GPU-Z and Precision-X reported that it was being throttled down due to a utilization limit (Util), meaning it was trying to conserve power in a low-GPU-usage scenario. However, I wanted to make sure that I was getting as much performance as possible from my GPU, even in these low-GPU-usage scenarios. Fortunately, I have found a workaround!

Regarding "Max Boost", note that it is not the same as the "Boost" clock found in NVIDIA Inspector or Precision-X.
When a Kepler graphics card is marketed, it has:
- a listed "GPU clock" (which is the Base 3-d clock rate, essentially the minimum rate it'll run for any 3-d game/app)
- a listed "Boost clock" (which is the average expected 3-d boosted clock rate)
- an unlisted "Max Boost clock" (which is the maximum the GPU will upclock to, before being limited by voltage thresholds)

So, what the workaround does is this: instead of allowing the driver full control of upclocking and downclocking via performance state P0 (where it downclocks whenever GPU usage falls below some threshold, and no specific clock rate can be set), we FORCE it to use performance state P2 (non-boosting 3D) along with a specified P2 clock rate equal to Max Boost, thus effectively forcing Max Boost!

This is only recommended if you want to ensure that you are getting absolute maximum performance all of the time. I'm not even sure if it has an effect on iRacing performance, but I would guess that it does. If you perform this workaround, a side effect will be that the GPU clock rate stays at maximum. This means that the idle power usage will increase a bit, but that is the tradeoff you will have to decide whether to make.

1) Verify that you have a Kepler GPU. You can do this by opening up NVIDIA Inspector, selecting your GPU from the dropdown at the bottom, and then looking near the top left for the "GPU" field. If it starts with GK, then it is a Kepler. Mine happens to be a GK104. If you don't have a Kepler, you should not proceed.
2) Determine your NVIDIA Inspector GPU Index. This is the value in parentheses within the NVIDIA Inspector dropdown at the bottom. We'll need this GPU Index value later.
3) Determine your Max Power Target. Open Precision-X, and determine the maximum value it will let you set your "Power Target" setting to. We'll call this Max Power Target, and we will use it later.
4) Monitor Precision-X. Set up Precision-X so that you can easily monitor the "GPU clock, MHz" value. This is the running clock rate of your GPU. It will normally vary, depending on things like GPU temperature (you set up that fan curve earlier, right?) and GPU usage.
5) Apply full load. Put full load (near-100% GPU Usage) on the GPU. I recommend opening GPU-Z, clicking the question mark "?" button at the middle right, then clicking "Start Render Test".
6) Determine your Max Boost. While under full load, monitor the "GPU clock, MHz" value for a few seconds. Assuming you are under the 70ºC temperature threshold, it should be at its maximum value. This is your "Max Boost" value. Make a note of it. GPU-Z should be reporting a "PerfCap Reason" value of "VRel, VOp", meaning that you are only limited by voltages. You can now close the program that put full load on the GPU, but keep running Precision-X to monitor the clock rate.
7) Instruct NVIDIA Inspector to execute the workaround. We want to: force a specific clock rate for the P2 power state, force the P2 power state to always be used, and force a power target level.
So, open a Command Prompt, and type the following command, substituting a couple values:

"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -setGPUClock:*,2,XXXX -forcePState:*,2 -setPowerTarget:*,QQQ
... substituting your "GPU Index" for each *
... substituting your ["Max Boost" + 1] for the XXXX values -- I had to add 1 to make it work correctly for me
... substituting your "Max Power Target" for the QQQ values
... Note: If you are uncomfortable with setting a higher Power Target, you can set it to 100. But I've had no problems setting it to Max Power Target.

So, for instance, since my GPU Index is 2, my Max Boost is 1241, and my Max Power Target is 140, my command line ends up becoming:
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -setGPUClock:2,2,1242 -forcePState:2,2 -setPowerTarget:2,140

Note: If you want to see a list of all the supported command line parameters for NVIDIA Inspector, you can type:
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" /?

This command will need to be applied every time you restart your computer. So, if you don't want to forget about it, you could:
- Open Notepad, paste the command, and save it as a .bat file located somewhere you can remember
- In your Windows Startup folder, place a shortcut to the .bat file
My Windows startup folder happens to be:
C:\Users\jacob_000\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup
- Restart Windows, and make sure the program runs after you log in. If User Account Control is on, you'll get a prompt that you must click Yes to every time you restart (an alternative that avoids the prompt is sketched below). You can then use Precision-X to verify that it correctly set your clock rate and Power Target values.
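- Alternatively, to avoid the UAC prompt, you could register the .bat as a highest-privilege logon task in Task Scheduler instead of using a Startup shortcut. A rough sketch (the task name and the .bat path here are placeholders, so adjust them to your own):
schtasks /Create /TN "Force Max Boost" /TR "\"C:\Path To Your\Force Max Boost.bat\"" /SC ONLOGON /RL HIGHEST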

To undo the workaround, you can:
- Restart the PC, and don't run the workaround (My testing indicates that this workaround is completely undone by a restart.)
- Execute the command that reverts the changes (It will restore all PStates, use "16" to not force any given PState, and set the power target level back to 100%):
"c:\Program Files\nVidia Inspector\nvidiaInspector.exe" -restoreAllPStates:* -forcePState:*,16 -setPowerTarget:*,100
... substituting your "GPU Index" for each *
- You could even create a .bat file, for that command, to easily be able to undo the workaround.

Note: If you mess with any other NVIDIA Inspector overclocking settings or command line settings, it is possible that a driver reinstallation may be necessary.

For more information on this workaround, here is the link where I originally found it (Note: They also go into overclocking, which I do not recommend)
http://www.overclock.net/t/1267918/guide-nvidia-inspector-gtx670-680-disable-boost-fixed-clock-speed-undervolting

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 36377 - Posted: 18 Apr 2014 | 8:42:49 UTC - in response to Message 36320.

Three days ago I experienced a worse situation with the last full driver (335.23 WHQL): GPU0 (GTX770) not only stopped boosting, it dropped to and stayed at a lower power level, ~549 MHz. Even after completing different tasks it stayed there. Suspending and resuming work didn't resolve the issue. The power usage was low (~27% IIRC) but the GPU usage was high (>80%).

I'm now trying 337.50 (without the workaround) and so far so good - GPU0 is boosting to 1267 MHz. However, these issues were fairly rare on my setup, across several driver releases, so it could take a few weeks before the issue naturally presents itself again. I have tried snoozing the GPU and resuming again, but that didn't trigger the issue. The type of task running might be a factor: the power usage difference between some current task types is around 10%, stemming from a 14% variation in GPU usage.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help
