
Message boards : Graphics cards (GPUs) : Dead graphics card

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 35894 - Posted: 26 Mar 2014 | 1:39:29 UTC
Last modified: 26 Mar 2014 | 1:46:16 UTC

I do not speak English, sorry for language errors.

I had an Asus Nvidia GTX 650 TI Boost graphics card. The card failed after only five months of processing for GPUGrid and is now useless. Since it is surely still under warranty, I should be able to get the seller to replace it with a new one just like it, or with another model of similar capability and a small price difference.
If I get a new NVIDIA graphics card I want to continue processing GPUGRID, but I would like to do so without pushing the card as hard as I did the previous one.

Can anyone tell me how I can get GPUGRID units to be processed using only 60% or 65% of the total capacity of the graphics card? That way I can perhaps reduce the risk of damage to the new NVIDIA card.


Thank you.

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35896 - Posted: 26 Mar 2014 | 2:37:25 UTC - in response to Message 35894.

I use the EVGA Precision software http://www.evga.com/precision/ to control both EVGA and Gigabyte cards. It is supposed to work with all Nvidia cards. In one case that does not have good airflow, I run one card at the 90% power setting. This will auto undervolt (and underclock as a result) the card a little bit. This is one simple way to reduce the stress on the card.

The big thing for reducing stress on the card without slowing it down is to keep it cool. Increase the fan speed on the card (you can set a custom fan profile with the EVGA software). Note that other people use MSI Afterburner, which I have not used, but I imagine it does much the same.

Also, make sure there is good airflow through the case. Or you can go with liquid cooling.

I do not like letting my cards run above 72°C. Not necessarily a magic number, but the cards will last longer and have fewer errors.
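For those who prefer a command line to the Precision GUI: on reasonably recent NVIDIA drivers, nvidia-smi can cap the board power directly. A minimal sketch, assuming nvidia-smi is on the PATH, the card and driver allow software power limits (not every GeForce board does), and the script runs with administrator rights; the wattage values are examples only:

# Hypothetical helper: cap an NVIDIA GPU's board power via nvidia-smi.
# Assumes nvidia-smi is on PATH, the board supports software power
# limits, and we are running with administrator/root rights.
import subprocess

def set_power_limit(watts, gpu_index=0):
    # -pl sets the board power limit in watts for the given GPU
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True)

def read_power_draw(gpu_index=0):
    # Query the current board power draw in watts
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

if __name__ == "__main__":
    set_power_limit(120)   # e.g. roughly 90% of a 134 W stock limit
    print(read_power_draw(), "W")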

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 35978 - Posted: 28 Mar 2014 | 23:40:13 UTC - in response to Message 35896.

Hello

I use the EVGA Precision software http://www.evga.com/precision/ to control both EVGA and Gigabyte cards. It is supposed to work with all Nvidia cards. In one case that does not have good airflow, I run one card at the 90% power setting. This will auto undervolt (and underclock as a result) the card a little bit. This is one simple way to reduce the stress on the card.

The big thing for reducing stress on the card without slowing it down is to keep it cool. Increase the fan speed on the card (you can set a custom fan profile with the EVGA software). Note that other people use MSI Afterburner, which I have not used, but I imagine it does much the same.

Also, make sure there is good airflow through the case. Or you can go with liquid cooling.

I do not like letting my cards run above 72°C. Not necessarily a magic number, but the cards will last longer and have fewer errors.


Hello

I use the GPU Tweak software, which also allows you to control the fan speed, but I have mostly kept it on "auto". I do not know the optimal operating temperature for a graphics card, but I guess values below 65 or 66 degrees Celsius would be the safest.
Could you explain how I can do this?: "I run one card at the 90% power setting. This will auto undervolt (and underclock as a result) the card a little bit. This is one simple way to reduce the stress on the card."
Before, I had very bad airflow. I have now changed the case and the airflow is good, but I am still interested in protecting my new graphics card with your simple way of reducing stress on the card.


Thanks for your answer.

Jeremy Zimmerman
Joined: 13 Apr 13
Posts: 61
Credit: 726,605,417
RAC: 0
Level
Lys
Message 35979 - Posted: 29 Mar 2014 | 0:52:13 UTC - in response to Message 35978.

In the EVGA Precision software it is as easy as sliding a bar. Older cards such as the 460 have "linked" unchecked and cannot check it (they could not set temp targets). So in the situation above I just slide the Power Target to 90%. I have since moved it to 105%, as I fixed the airflow problem I had with my case.

The default Fan Curve is for a really quiet (meaning hot) card. Click the Fan Curve to change it up a bit.

On my newer 780Ti cards, I unlink the power and temp targets and leave Power at 105%, Temp at 72, and Prioritize Temp. That way, it will run at full boost for SANTI WUs, and will throttle back on a combination of warm days and NOELIA WUs to keep it from going past 72 (even when the fans have reached 100%) if needed.
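Boost 2.0 does the temperature-target part in hardware; for cards that lack it, the same idea can be approximated in software. A hypothetical watchdog sketch, again assuming nvidia-smi is available and the board accepts power-limit changes; the 72°C target and the power-limit bounds are illustrative values, not settings from this thread:

# Hypothetical software temperature target: nudge the GPU power limit
# down when the card runs hot, and back up when it cools off.
import subprocess, time

TARGET_C = 72                        # temperature we try to stay under
LIMIT_MIN_W, LIMIT_MAX_W = 100, 140  # allowed power-limit range (example)

def gpu_temp(gpu_index=0):
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

limit = LIMIT_MAX_W
while True:
    t = gpu_temp()
    if t > TARGET_C and limit > LIMIT_MIN_W:
        limit -= 5                   # too warm: shave 5 W off the cap
    elif t < TARGET_C - 3 and limit < LIMIT_MAX_W:
        limit += 5                   # comfortably cool: give headroom back
    subprocess.run(["nvidia-smi", "-pl", str(limit)], check=True)
    time.sleep(30)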



ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 35985 - Posted: 29 Mar 2014 | 13:21:16 UTC

That's some very good advice from Jeremy! I'm running my GTX660Ti at a reduced power target (108 W instead of 130 W - but the software will only show percentages). In addition to this I'm applying a 50 MHz GPU overclock, as I know the card can still take it. This way I'm saving power and running more energy-efficiently thanks to the lower voltage that is automatically applied. I also have the memory OC'ed slightly, as this boosts performance somewhat while costing pretty much nothing.

Generally, it's very probably just bad luck that your previous card failed so early. There's always a "U" curve when plotting failure rate over runtime: some early failures not caught by the manufacturer, and an increased failure rate nearing the end of life.

In your case you very probably got one of those early-failing chips (I also had one once), and it would have failed relatively early without GPU-Grid too. What we have been talking about cannot protect you from such a chip, but if you have a normal one it will push the "end of life" point further away. And make you run more energy-efficiently ;)

BTW: you could ask to get a GTX750 or GTX750Ti instead as a replacement. These use the new Maxwell architecture and are significantly more power efficient than Keplers (like your previous card).

MrS
____________
Scanning for our furry friends since Jan 2002

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36011 - Posted: 30 Mar 2014 | 23:20:49 UTC
Last modified: 30 Mar 2014 | 23:22:44 UTC

Hello

I still do not know which graphics card the seller will give me to replace my GTX 650 TI Boost; they will give me a list of the available graphics cards and I will have to choose one. When I get my new graphics card I will return to this forum to write some more questions. :)

I have some more questions:

Does anyone know the average time that a PC can survive processing data for Boinc? My graphics card worked steadily at 90% of its capacity and the processor at 80% of its capacity, and that also puts stress on the motherboard, the memory and the other components.

Does anyone know if in the future GPUGrid will also work with AMD Radeon graphics cards? That would be interesting.



Thanks for the help.

Vagelis Giannadakis
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Message 36015 - Posted: 31 Mar 2014 | 8:28:28 UTC - in response to Message 36011.

My home server has been crunching with BOINC for several years uninterrupted. Never had any problem whatsoever, even in the hot Greek summers.

Pretty soon I will have been crunching for GPUGrid on my GTX 650 Ti full time, 24/7, for a whole year. No problems with that either.

If your computer is well-built, you will not have a problem with crunching.
Well-built means in general:

    High-quality components (motherboard, memory, CPU heatsink, GPU)
    High-quality, high-efficiency power supply
    Very good airflow


____________

mikey
Joined: 2 Jan 09
Posts: 291
Credit: 2,038,916,115
RAC: 10,332,146
Level
Phe
Message 36024 - Posted: 31 Mar 2014 | 13:26:43 UTC - in response to Message 36011.


I have some more questions:

Does anyone know the average time that a PC can survive processing data for Boinc? My graphics card worked steadily at 90% of its capacity and the processor at 80% of its capacity, and that also puts stress on the motherboard, the memory and the other components.

Does anyone know if in the future GPUGrid will also work with AMD Radeon graphics cards? That would be interesting.

Thanks for the help.


As Vagelis says, a "high-quality, high-efficiency power supply" and "very good airflow" are two very important keys. "High-quality components", though, sort of depends on how often you want to get your hands dirty keeping the PC running. I personally fix PCs for friends and get their old parts instead of money in return; a lot of the hard drives etc. that I use in my machines came that way, and they tend to crash more often than new ones. I also buy refurbished hard drives, and they too tend to crash more often than brand-new ones. But I do monthly backups of my Boinc-only machines, so if they crash I am only down for a couple of hours and then right back up again, with the only things lost being the units.

I have been crunching 24/7 since 1999 and although those original machines are no longer crunching, I do have machines that are at least 5 years old and still going strong. Most of my PCs, and I have 15 here at home, have gpus in them and all of them crunch 24/7, most of the time even when I am on vacation. While Boinc is stressful, it is not making the chip do anything it wasn't designed to do in the first place. Overclocking and all that kind of stuff can damage a chip, but as you read about the people who do that stuff here at Boinc, you will also see that they are not overclocking their chips willy-nilly; they are doing it with care and thought. They are not trying to just get maximum performance, they are trying to get better performance intelligently.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 36053 - Posted: 1 Apr 2014 | 20:30:12 UTC - in response to Message 36011.

Does anyone know if in the future GPUGrid will also work with AMD Radeon graphics cards? That would be interesting.

No. They tried to in the past, but the results were bad. Things have probably improved, but the GPU-Grid app is comparatively complex, so porting it to OpenCL (as would be necessary for AMD GPUs) and afterwards maintaining 2 separate code paths (OpenCL and CUDA) would require significantly more man-power, which they'd rather use for science, as long as nVidia GPUs are enough for them. And dropping CUDA entirely in favor of OpenCL is surely not an option either, as this would make the nVidia cards run slower and less efficiently.

MrS
____________
Scanning for our furry friends since Jan 2002

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36054 - Posted: 1 Apr 2014 | 23:08:16 UTC
Last modified: 1 Apr 2014 | 23:09:25 UTC

Hello

The answers you have given me have been very useful. At the moment I have only one PC for BOINC processing (actually right now I have none, because the only one I have is still missing its graphics card). I have never overclocked; I thought about it once, but I prefer to use the components as manufactured.
When my graphics card stopped working and I sent it to the seller under its warranty, the people who handled it asked me if I had overclocked it, which I never did and never will. I think like you.

I have another question: does anyone know if GPUGrid plans to complete its work at some point in the future and become a retired project? Because there are not many projects that use graphics cards...


Thank you all for responding.

Stefan
Project administrator
Project developer
Project tester
Project scientist
Joined: 5 Mar 13
Posts: 348
Credit: 0
RAC: 0
Message 36056 - Posted: 2 Apr 2014 | 8:50:28 UTC - in response to Message 36054.

Nah, as long as we can simulate, we will :) There are no plans for closing down GPUGRID.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36057 - Posted: 2 Apr 2014 | 11:12:50 UTC - in response to Message 36054.

There are actually several projects that use GPUs. Off the top of my head, I am connected to the following projects that issue GPU tasks:
- GPUGrid (almost always has GPU tasks available)
- Poem@Home (rarely has)
- World Community Grid (rarely has)
- Einstein@Home (always has)
- Albert@Home (always has)
- Milkyway@Home (always has)
- SETI@Home (always has)
- SETI Beta (always has)
... and I'm sure there are more that I'm not connected to.

There is no harm in connecting to several projects (I'm connected to 30), and configuring Resource Shares and GPU Exclusions so that you keep all your resources busy, while prioritizing exactly which projects you want them to run on. The flexibility of BOINC is awesome.
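To illustrate the GPU Exclusions half of that: the BOINC client reads exclusions from a cc_config.xml file in its data directory, via the documented <exclude_gpu> option. A minimal sketch that writes such a file; the project URL and device number are placeholders for whatever you want to exclude, and resource shares themselves are set in each project's web preferences, not in this file:

# Hypothetical helper: write a BOINC cc_config.xml that keeps GPU 0
# from running tasks for one project. Adjust the path for your install
# (on Windows the BOINC data directory is typically C:\ProgramData\BOINC).
from pathlib import Path

CC_CONFIG = """<cc_config>
  <options>
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>  <!-- placeholder project -->
      <device_num>0</device_num>                <!-- GPU index to keep free -->
    </exclude_gpu>
  </options>
</cc_config>
"""

Path("cc_config.xml").write_text(CC_CONFIG)
print("Wrote cc_config.xml - reread config files or restart BOINC to apply.")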

mikey
Joined: 2 Jan 09
Posts: 291
Credit: 2,038,916,115
RAC: 10,332,146
Level
Phe
Message 36059 - Posted: 2 Apr 2014 | 12:19:59 UTC - in response to Message 36057.

There are actually several projects that use GPUs. Off the top of my head, I am connected to the following projects that issue GPU tasks:
- GPUGrid (almost always has GPU tasks available)
- Poem@Home (rarely has)
- World Community Grid (rarely has)
- Einstein@Home (always has)
- Albert@Home (always has)
- Milkyway@Home (always has)
- SETI@Home (always has)
- SETI Beta (always has)
... and I'm sure there are more that I'm not connected to.

There is no harm in connecting to several projects (I'm connected to 30), and configuring Resource Shares and GPU Exclusions so that you keep all your resources busy, while prioritizing exactly which projects you want them to run on. The flexibility of BOINC is awesome.


DistRTgen is one that you missed, so are Moo, Asteroids, PrimeGrid and Collatz. Note that not ALL projects can use AMD cards. For credits, DistRTgen absolutely, hands down, pays the most, but it also favors the high-end cards the most. Each project is different and each uses your gpu slightly differently; some get close to 90+% usage on just one unit, while on others you can run multiple units at once with no problems.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36072 - Posted: 2 Apr 2014 | 23:34:10 UTC - in response to Message 35979.

On my newer 780Ti cards, I unlink the power and temp targets and leave Power at 105%, Temp at 72, and Prioritize Temp. That way, it will run at full boost for SANTI WUs, and will throttle back on a combination of warm days and NOELIA WUs to keep it from going past 72 (even when the fans have reached 100%) if needed.


Jeremy,

Thanks for this tip! I've only ever used Precision X for monitoring the cards so the settings were at default, but I tried your tweaks and instantly saw some improvement from one card which was always powering down to base clock speeds. It is now properly applying boost as it should.

____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 36080 - Posted: 3 Apr 2014 | 21:22:19 UTC - in response to Message 36059.
Last modified: 3 Apr 2014 | 21:22:35 UTC

DistRTgen is one that you missed, so are Moo, ... PrimeGrid and Collatz.

And *some* will question the scientific value of these projects ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Level
Leu
Message 36084 - Posted: 4 Apr 2014 | 6:29:27 UTC - in response to Message 36080.

DistRTgen is one that you missed, so are Moo, ... PrimeGrid and Collatz.

And *some* will question the scientific value of these projects ;)

MrS


Those projects might be backed by power companies ;-) Some of them don't provide any value, just waste power...

mikey
Joined: 2 Jan 09
Posts: 291
Credit: 2,038,916,115
RAC: 10,332,146
Level
Phe
Message 36085 - Posted: 4 Apr 2014 | 11:02:30 UTC - in response to Message 36084.

DistRTgen is one that you missed, so are Moo, ... PrimeGrid and Collatz.

And *some* will question the scientific value of these projects ;)

MrS


Those projects might be backed by power companies ;-) Some of them don't provide any value, just waste power...


I wasn't trying to do an evaluation, just provide the names of some other gpu projects.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36086 - Posted: 4 Apr 2014 | 11:06:19 UTC

I agree with Mikey. Let's let the users decide what they want to attach to. I had just provided a list of ones I was quite familiar with, because I run them. Mikey added others that I was less familiar with, because I don't run them.

Nobody asked for an evaluation. I may agree with some of the sentiments that were stated, but I don't think they should have been stated.

Long story short: There are plenty of BOINC GPU projects out there to keep a GPU busy. :)

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36090 - Posted: 5 Apr 2014 | 0:06:53 UTC

Hello

They have already given me the list of graphics cards offered to replace the Nvidia GTX 650 TI Boost I had, and unfortunately the graphics card I expected to find is not included in this list. What they offer me are these:

* ASUS GeForce GTX660 TI DirectCU II OC 2GB DDR5 PCI-E

* EVGA GeForce GTX660 2GB DDR5 ACX Dual PCI-E

* GIGABYTE GeForce GTX660 TI 2GB PCI-E

I have some questions:

The motherboard I have is an Asus but I cannot remember the model. Do you think the EVGA card can work well for Boinc (GPUGrid and other projects) on an Asus motherboard? I do not know this graphics card and I cannot find it on the EVGA Latinoamerica site: http://latam.evga.com/products/prodlist.asp?family=GeForce%20Series%20Family%20600 , or on the EVGA USA site: http://www.evga.com/Products/ProductList.aspx?type=0&family=GeForce+600+Series+Family&chipset=GTX+660

Another question: can any of these three graphics cards run in a PC with an AMD FX-8350 processor and an 850 Watt 80+ Gold power supply? The consumption of the Asus card listed first worries me; I'm not sure the power supply is able to feed it with the processor working at 100% capacity.


Thank you all.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36092 - Posted: 5 Apr 2014 | 1:23:21 UTC - in response to Message 36090.
Last modified: 5 Apr 2014 | 1:32:31 UTC

do you think the EVGA card can work well for Boinc on an Asus motherboard?

The brands of the motherboard and graphics card don't make a difference as long as they're compatible. I use EVGA cards and an ASUS motherboard in my rig.

Another question: can any of these three graphics cards run in a PC with an AMD FX-8350 processor and an 850 Watt 80+ Gold power supply?

850W should be enough depending on what else you're powering. I previously ran two GTX680s and an i7-3770K on a 750W power supply with no problems.

Here's a link to a pretty good calculator for determining PSU needs:
http://extreme.outervision.com/psucalculatorlite.jsp
____________

mikey
Joined: 2 Jan 09
Posts: 291
Credit: 2,038,916,115
RAC: 10,332,146
Level
Phe
Message 36095 - Posted: 5 Apr 2014 | 11:23:35 UTC - in response to Message 36090.

Hello
Another question: can any of these three graphics cards run in a PC with an AMD FX-8350 processor and an 850 Watt 80+ Gold power supply? The consumption of the Asus card listed first worries me; I'm not sure the power supply is able to feed it with the processor working at 100% capacity.
Thank you all.


I was reading the other day and found that the difference between Gold power supplies and Bronze ones is the heat they put out. Gold ones put out less heat than Bronze ones. I ALWAYS thought it had to do with the quality of the unit, but it doesn't, at least not in the way I thought.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36099 - Posted: 5 Apr 2014 | 13:29:09 UTC - in response to Message 36095.

Hello
Another question: can any of these three graphics cards run in a PC with an AMD FX-8350 processor and an 850 Watt 80+ Gold power supply? The consumption of the Asus card listed first worries me; I'm not sure the power supply is able to feed it with the processor working at 100% capacity.
Thank you all.


I was reading the other day and found that the difference between Gold power supplies and Bronze ones is the heat they put out. Gold ones put out less heat than Bronze ones. I ALWAYS thought it had to do with the quality of the unit, but it doesn't, at least not in the way I thought.


The reason the Bronze units put out more heat than the Gold ones is that they are not as electrically efficient, and Platinum is more efficient than Gold, so there is another drop in heat output there as well for a given load. The better the quality of the power supply, the more of the electrical power is put to use running your computer and the less is converted to (lost as) heat. This also decreases the total power you draw from the wall over time. That's my understanding of it.

https://en.wikipedia.org/wiki/80_Plus
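A quick worked example of that relationship, using the 80 Plus 50%-load efficiency figures from the Wikipedia page above (85% Bronze, 90% Gold, 92% Platinum at 115 V) and an assumed steady 400 W DC load:

# Worked example: wall draw and waste heat for a fixed DC load at
# different 80 Plus efficiency levels (50%-load figures, 115 V ratings).
dc_load_w = 400.0                        # what the PC actually consumes

for name, eff in [("Bronze", 0.85), ("Gold", 0.90), ("Platinum", 0.92)]:
    wall_w = dc_load_w / eff             # power drawn from the outlet
    heat_w = wall_w - dc_load_w          # the difference becomes heat
    print(f"{name:8s} wall draw {wall_w:6.1f} W, heat {heat_w:5.1f} W")

# Bronze   wall draw  470.6 W, heat  70.6 W
# Gold     wall draw  444.4 W, heat  44.4 W
# Platinum wall draw  434.8 W, heat  34.8 W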
____________

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 36101 - Posted: 5 Apr 2014 | 14:40:02 UTC - in response to Message 36090.

In my opinion EVGA makes the best graphics cards around, and they work with any motherboard.
But perhaps you can pay a bit of extra money and get a 750Ti?
____________
Greetings from TJ

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 36110 - Posted: 5 Apr 2014 | 21:43:17 UTC - in response to Message 36101.
Last modified: 5 Apr 2014 | 21:52:38 UTC

For GPUGrid a GTX660Ti is faster than a GTX660.

An 850W 80+ Gold PSU should be capable of supporting a GTX660Ti (150W TDP).

I ran two GTX470s on a system with a 550W PSU for a year without issue. The reason is that GPUGrid WUs don't use a card's full 215W TDP; more like 70 to 75% of it, so the pair drew roughly 301W to 322W. The rest of the system probably used ~100W, which kept me a bit under the 80% mark of the PSU's Wattage rating. Going over that tends to challenge the PSU more, and efficiency drops off.
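That 80% rule of thumb is easy to sanity-check for any build. A small sketch using skgiven's own numbers from above; the component figures are his estimates, and the load factor is the 70-75% of TDP he quotes:

# Sanity-check PSU loading against the ~80% rule of thumb described above.
psu_rating_w = 550.0
gpu_tdp_w = 215.0
gpu_count = 2
gpu_load_factor = 0.75        # GPUGrid WUs draw roughly 70-75% of TDP
rest_of_system_w = 100.0      # CPU, board, drives, fans (estimate)

total_w = gpu_tdp_w * gpu_count * gpu_load_factor + rest_of_system_w
fraction = total_w / psu_rating_w
print(f"Estimated draw: {total_w:.0f} W ({fraction:.0%} of the PSU rating)")
if fraction > 0.80:
    print("Over 80% - expect lower efficiency and more stress on the PSU.")
# -> Estimated draw: 422 W (77% of the PSU rating)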
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36129 - Posted: 6 Apr 2014 | 23:41:24 UTC
Last modified: 6 Apr 2014 | 23:43:42 UTC

Hello

I want to get a Nvidia GTX 750 or GTX 750 TI graphics card, or a Maxwell equivalent of the GTX 660 or GTX 660 TI, but the seller does not include them in the list of available graphics cards; I have to choose one of the three that are on the list. If I could choose a graphics card made in 2014 I would invest a little more, but I am forced to buy a graphics card made in 2012. It will only be for one year, because in 2015 I plan to buy a new PC, so I'll buy the EVGA GTX 660 now, which has the lowest price.

Does anyone know if the EVGA GeForce GTX660 2GB DDR5 PCI-E Dual ACX graphics card has two fans like the equivalent Asus, MSI or Gigabyte cards?

I am asking many questions. :)

Thank you all for the help.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36130 - Posted: 6 Apr 2014 | 23:55:20 UTC - in response to Message 36129.

Hello

I want to get a Nvidia GTX 750 or GTX 750 TI graphics card, or a Maxwell equivalent of the GTX 660 or GTX 660 TI, but the seller does not include them in the list of available graphics cards; I have to choose one of the three that are on the list. If I could choose a graphics card made in 2014 I would invest a little more, but I am forced to buy a graphics card made in 2012. It will only be for one year, because in 2015 I plan to buy a new PC, so I'll buy the EVGA GTX 660 now, which has the lowest price.

Does anyone know if the EVGA GeForce GTX660 2GB DDR5 PCI-E Dual ACX graphics card has two fans like the equivalent Asus, MSI or Gigabyte cards?

I am asking many questions. :)

Thank you all for the help.


Yes, the EVGA card with ACX cooling has two axial fans as opposed to a single radial fan. These fans are typically quieter than a single radial fan, but will dump more heat into your computer case.

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36131 - Posted: 7 Apr 2014 | 0:04:12 UTC

Hello

Change of plans: what do you think about the GIGABYTE GeForce GTX660 TI 2GB PCI-E? The price difference compared to the EVGA is minimal.


Thank you all for the help.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36132 - Posted: 7 Apr 2014 | 0:11:18 UTC - in response to Message 36131.

Hello

Change of plans: what do you think about the GIGABYTE GeForce GTX660 TI 2GB PCI-E? The price difference compared to the EVGA is minimal.


Thank you all for the help.


skgiven has put together a good chart of the relative performance of various cards on this project. It looks like the 660Ti outperforms the 660, as one might expect.

http://www.gpugrid.net/forum_thread.php?id=1150&nowrap=true#35696

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 36169 - Posted: 8 Apr 2014 | 20:21:03 UTC

GTX660/Ti aren't so bad. It doesn't matter that they were introduced in 2012, as even the current GTX760 and GTX770 use the same chip, just in a different configuration. Apart from the GTX750/Ti, anything you can buy today from nVidia is either a Kepler (just like the GTX660/Ti) or even still a Fermi (some low-end models). And if in a year the bigger Maxwells are far better than the card you're getting now, you can still sell it as relatively new.

Mikey wrote:
I wasn't trying to do an evaluation, just provide the names of some other gpu projects.

I know. And I thought Jacob's list was pretty decent, because it featured the - from my point of view - most useful projects. Call it unintentional evaluation on his part, as he's surely not participating in projects he doesn't find as useful.

And I added that smiley to hint at "there is potential for quite some discussion here" without going into details. I was just trying to make people who may not be as familiar with the projects' contents aware that there are large differences.

And I must confess that it's very important to me to warn people about Moo. There is absolutely no useful knowledge to be gained from running this project - and this is an objective fact. I accept that the value of all other GPU projects is up for debate and personal preference.

MrS
____________
Scanning for our furry friends since Jan 2002

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36171 - Posted: 8 Apr 2014 | 21:08:03 UTC - in response to Message 36169.
Last modified: 8 Apr 2014 | 21:16:13 UTC

I know. And I thought Jacob's list was pretty decent, because it featured the - from my point of view - most useful projects. Call it unintentional evaluation on his part, as he's surely not participating in projects he doesn't find as useful.


To be honest, I like the scientific research of protein analysis. I prefer GPUGrid because they pay great credits, but I think I also equally prefer World Community Grid (they had a Help Conquer Cancer GPU project that was awesome), and POEM (which hardly ever has GPU tasks).

Those projects keep my GTX 660 Ti and my GTX 460 busy.

And then there's my GTS 240. It can't do much, for sure, but... it CAN do Einstein/Albert/SETI/SETIbeta (all 4 are projects that I'd normally not do). So, I have those 4 projects resource share set to 0 (so I otherwise won't get tasks, especially CPU tasks), and then whenever my GTS 240 needs a new task, it gets it from one of those 4 projects.

For those 4 projects, Einstein/Albert/SETI/SETIbeta, I used to have the resource share at 1, so server requests could get more than a single task at a time. And I also had GPU Exclusions in place to not ever crunch them on my 2 beefier GPUs... but then GPUGrid ran out of work recently, the main GPUs went idle, and I had a sadface. So... I juggled things around a bit, such that they are not excluded from my 2 beefier GPUs, but now use a resource share of 0, so they'll run on them if they can't get work from GPUGrid and POEM and WCG.

MilkyWay is another project that has GPU tasks, but I actually want to do some of their CPU work (I'm a BOINC Alpha tester, and MilkyWay is one of the only projects to offer MT multi-threaded tasks)... so I keep that project's resource share at the same as my other CPU projects, and then have it configured to not get NVIDIA work.

Oh, and because World Community Grid WCG actually has several sub-projects, I have its resource share set at 4 times any of my normal CPU projects, so they do 4 times the work/REC/RAC.

Oh again, one other note.. Because I am a BOINC Alpha tester, and VM work is still ongoing, I also go out of my way to attach to any of the VM projects, including: RNA World, Test4Theory, Climate@Home, and Beauty@Home. So, I am regularly crunching RNA World tasks (one of them is at 1250 hours, and still going strong!), and Test4Theory. I've actually helped file and solve some serious bugs with Oracle, to make the latest versions of VirtualBox work smoothly for our BOINC community.

With enough patience, and testing, and configuring... BOINC can run any project you want, in almost any way you want.

Are we having fun? Thanks for reading.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Message 36215 - Posted: 10 Apr 2014 | 20:22:02 UTC - in response to Message 36171.

Are we having fun? Thanks for reading.


Haha, sure! And thanks for sharing.

MrS
____________
Scanning for our furry friends since Jan 2002

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36286 - Posted: 15 Apr 2014 | 2:33:58 UTC
Last modified: 15 Apr 2014 | 3:25:01 UTC

Hello

Finally I have an EVGA NVidia Geforce GTX 660 graphics card. Now I have another problem: how am I supposed to register on the EVGA site to download the EVGA Precision software? Every time I try, it returns a message saying "You must fill in all the required fields". Am I supposed to fill in all my data? Because to the right of the fields to be completed it says "Optional", and for that reason I have not completed them.


Thanks.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36287 - Posted: 15 Apr 2014 | 2:42:24 UTC

What fields did you fill in?

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36290 - Posted: 15 Apr 2014 | 3:23:56 UTC

Hello

I filled in the Country, Desired Username, Password, Confirm Password and Captcha fields.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36292 - Posted: 15 Apr 2014 | 4:33:40 UTC

You must populate every field except "App/Suite". Also, this thread has nothing to do with Precision-X. Finally, questions regarding Precision-X support should be directed to their forums, not here. Hope this helps, good luck.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 36296 - Posted: 15 Apr 2014 | 9:02:24 UTC - in response to Message 36286.

I once had trouble ordering on the EVGA site via Internet Explorer; I filled the form in several times, but it failed. Then I used FireFox and it worked at once.
The problem with IE was a restriction in the AntiVirus software; adding the EVGA site to its "safe" list should have solved the issue.
Perhaps one of the above helps you.
____________
Greetings from TJ

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36305 - Posted: 15 Apr 2014 | 18:43:35 UTC
Last modified: 15 Apr 2014 | 18:45:45 UTC

Hello

Is there any other way to get the EVGA Precision software http://www.evga.com/precision/ than signing up on the EVGA site? They ask me for too much personal information; I did not give any personal information to get the Asus GPU Tweak.


Thanks. Excuse me for asking so many questions.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36306 - Posted: 15 Apr 2014 | 18:54:50 UTC - in response to Message 36305.
Last modified: 15 Apr 2014 | 19:03:24 UTC

You may use garbage information. In fact, to test the login, I signed abcdefg@abcdefg.com up.
Signing up for an account is the only way to get software from EVGA.

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36307 - Posted: 15 Apr 2014 | 20:24:17 UTC

I have already completed the registration but the system did not send me the email with the activation code. I tried again and it tells me to use a new user name because the one I had chosen is already in use. The only thing I'm missing to process Boinc project units is this software to control the EVGA video card.


:(

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36308 - Posted: 15 Apr 2014 | 20:33:16 UTC - in response to Message 36307.

The only thing I'm missing to process Boinc project units is this software to control the EVGA video card.


You don't actually need that software to begin crunching on your new card. You only need it if you want to tweak the default settings (clock/memory speed, fan curve, etc.).

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36310 - Posted: 15 Apr 2014 | 23:11:53 UTC

Hello

I want to have the software to monitor the performance of the card and to reduce the voltage as you suggested to me in this topic.
I already lost the previous graphics card and I want to reduce the risk of losing this one.


Thanks for the reply.

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36311 - Posted: 15 Apr 2014 | 23:34:16 UTC
Last modified: 15 Apr 2014 | 23:34:55 UTC

I have already downloaded EVGA Precision X and installed it; I have successfully changed the skin, so everything works fine.

Can anyone tell me how to lower the voltage to 90% and "save" the changes so the card always works that way?
I've set the fan speed to "automatic by software".


Thanks.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36312 - Posted: 16 Apr 2014 | 0:33:10 UTC
Last modified: 16 Apr 2014 | 0:34:44 UTC

Setting the "Power Target" value to a number below 100% will instruct the GPU not to ramp its clock all the way when being utilized. So maybe try a 75% Power Target and hit Apply, if you are that worried?

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36313 - Posted: 16 Apr 2014 | 1:21:58 UTC

I do not know how much risk is involved in processing data for GPUGrid or other projects using the graphics card, and I do not know how often a graphics card is damaged by processing data for Boinc.
What do the users of this forum do to reduce the risk of damaging their graphics cards?


Thank you.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36314 - Posted: 16 Apr 2014 | 1:32:01 UTC - in response to Message 36313.

In the time I've been crunching for BOINC, I've used 7 graphics cards in two machines and have never had any of them damaged. As long as you're operating them within safe limits and keeping them as cool as you can, you should be fine. As a general rule for temperature, try keeping them below 75C if they will be running for long periods of time, and ideally below 70C. The lower the better. Do some research on your new card to find out all you can about its capabilities and limits. Google is your best friend for that.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36315 - Posted: 16 Apr 2014 | 1:38:35 UTC
Last modified: 16 Apr 2014 | 1:55:52 UTC

I've also been running BOINC for several years, using 5 different GPUs. I prefer to push them as hard as they will go, without adjusting any of their factory-preset voltages. (So, for instance, I will set the Power Target % to 140%, so it'll up-clock as much as it can within voltage tolerance, regardless of power usage.)

The only GPU problem I had was when I bent a fan blade while cleaning a GPU fan. It made a high-pitched whirring sound from that point on.

But I've personally not had any problems running the cards as hard as they will go [without adjusting voltages, and without adjusting clock rates].

Keeping the temps below 70*C is best. I set a custom fan curve in Precision-X, so that it sets the fan to full speed by the time the card gets to 69*C.
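For anyone wondering what a fan curve like that amounts to, it is just piecewise-linear interpolation from temperature to fan speed. A toy sketch; the curve points are made-up examples, not Jacob's actual settings:

# Toy fan curve: piecewise-linear map from GPU temperature (C) to fan %.
CURVE = [(30, 30), (50, 45), (60, 60), (69, 100)]  # (temp C, fan %) points

def fan_percent(temp_c):
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            # linear interpolation between neighbouring curve points
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]        # past the last point: pin at maximum fan

for t in (40, 55, 65, 72):
    print(t, "C ->", round(fan_percent(t)), "% fan")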

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36316 - Posted: 16 Apr 2014 | 2:08:35 UTC - in response to Message 36315.

Keeping the temps below 70*C is best. I set a custom fan curve in Precision-X, so that it sets the fan to full speed by the time the card gets to 69*C.


How long does it stay at 100%? I've always tried to keep the fan no higher than 80% to save wear on those parts. Am I being overly cautious?

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36317 - Posted: 16 Apr 2014 | 2:23:20 UTC - in response to Message 36316.
Last modified: 16 Apr 2014 | 2:33:05 UTC

My eVGA GTX 660 Ti 3GB FTW has a fan that only allows settings of 30% to 80%, as shown by the yellow dashed lines on the fan curve. My fan curve is set so that it reaches max fan (80%) right before the thermal limiting temp (70*C), so 80% at 69*C. To answer your question, it's usually at 80% 24/7, because I run a hot computer. And I've not experienced any harm running my fans like that.

In fact, even with those settings, I was having trouble keeping it below 70*C. So I recently wrote a program that ramps my system fan up in accordance with my 660 Ti's temperature. It works nicely (I take a lot of pride in it), and now my GPU stays below 70*C (meaning it'll stay at its Max Boost of 1241 MHz, without dipping down 13 MHz to 1228 MHz).

For reference, I have 2 other GPUs in the system, a GTX 460 and a GTS 240. The GTX 460 can do fan settings of 30% to 100%, and the GTS 240 can do fan settings of 35% to 100%. Neither of these 2 GPUs supports "boosting" or "thermal down-throttling", so they will run at full MHz regardless of my fan setting. Nevertheless, my fan curve does not simply stop at 69*C. I have another point on the curve at 100% fan for 85*C, so that those other 2 GPUs can ramp up beyond 80% fan, up to an all-out fan setting at 85*C (what I consider to be the beginning of the danger zone for a GPU). Temps are usually between 70*C and 77*C on them, so fan speeds are usually around 80%-90%, 24/7. No problems with fan wear and tear, just a bit noisy.

I don't know if you're being over-cautious. But I would feel pretty comfortable pushing the components as hard as they will go without overclocking or overvolting. (So, maybe you ARE being overly cautious)

Hope this helps!

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36318 - Posted: 16 Apr 2014 | 2:36:34 UTC - in response to Message 36317.

Thanks, Jacob. That's a lot of good info.

My EVGA GTX 780Ti Superclocked cards will go to 100% on the fan curve. Currently I'm running the new NATHAN_RPS and SDOERR_BARNA WUs on these. One card is holding 70C at 80% fan and the other 71C at 81% fan. About 85% GPU utilization each. The room temp is 14C. The 780Ti seems to run a bit warmer under higher loads than my two EVGA GTX 680 FTWs did in the same box before I swapped them out.

I'll try adjusting the curve more to see if I can get them to stay below 70.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36319 - Posted: 16 Apr 2014 | 2:41:23 UTC - in response to Message 36318.
Last modified: 16 Apr 2014 | 3:07:48 UTC

I'm trying to think, in my head, what would happen if you set your fan curve to be at 100% at 69*C. I guess it would mean that the fans would work harder, and your end result would be cooler GPUs, but you would be overworking the fans unnecessarily. Hmm... Not sure what I'd do in your scenario. You might want to just try that setting, to see what the end result is -- I'm very curious. Or maybe, since they usually do okay at around 80% fan, you could try setting a point of 75% at 67*C, then another point of 100% at 69*C.

But here's a tip that'll help performance:

There's a separate thread on maintaining Max Boost, if you want to keep it clocked at Max Boost even when the stupid drivers stupidly think the utilization is "not high enough". In it are instructions to create a .bat file, that you can set to run at Windows startup, to ensure maximum performance. If you're going to run the GPU(s) 24/7 anyway, it makes sense to ensure they are at Max Boost, for maximum performance.

Original Forum post:
http://www.gpugrid.net/forum_thread.php?id=3647
Post summarizing the .bat file:
http://www.gpugrid.net/forum_thread.php?id=3647#35562
Post detailing exact procedures for the .bat file:
http://www.gpugrid.net/forum_thread.php?id=3647#36320
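As a rough idea of what such a startup script is guarding against (the linked posts have the real procedure), here is a hypothetical monitor that just logs whenever the driver drops the card out of its top clock; the 1241 MHz figure is the example max boost mentioned elsewhere in this thread:

# Hypothetical downclock logger: warn when the GPU falls out of max boost.
# Assumes nvidia-smi is on PATH; the expected clock is an example value.
import subprocess, time

EXPECTED_SM_MHZ = 1241    # max boost of the card being watched (example)

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.sm,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    sm_mhz, util = (int(x) for x in out.stdout.strip().split(", "))
    if sm_mhz < EXPECTED_SM_MHZ:
        print(f"Downclocked: {sm_mhz} MHz at {util}% GPU utilization")
    time.sleep(60)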

EMYArg: I apologize for sounding so rude earlier. If Precision-X doesn't properly set your custom Power Target % every Windows startup, you could read through the instructions in those 3 links, to create a .bat file that does.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36321 - Posted: 16 Apr 2014 | 3:27:37 UTC - in response to Message 36319.

On these cards the base clock is 980MHz. GPU0 at full boost goes to 1124MHz and GPU1 goes to 1137MHz. (My 680s did the same thing: one boosted about one 13MHz "step" more than the other.) GPU1 also seems to run consistently 2-3 degrees cooler as well.

Currently I'm running driver 335.23 with the Power Target in Precision X at 106% (the max for me) and a temp target of 72C. Under these conditions I've not had any trouble with downclocking below full boost while under the Temp Target. I had the "under-utilization" problem with earlier drivers, but since I've gone to my current driver I haven't seen the problem come back.

I read those threads and was set to implement the suggestions but then, as I said above, the problem went away. Thanks for putting in the effort on that, though. It seems to have helped a number of people already.

Currently a fan speed of 94% is holding GPU0 at 70C at full boost. GPU1 is at 68C with 88% fan and also at full boost. Both are still averaging around 85% utilization with NATHAN_RPS and SDOERR_BARNA WUs.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36322 - Posted: 16 Apr 2014 | 3:37:17 UTC - in response to Message 36321.
Last modified: 16 Apr 2014 | 3:43:52 UTC

I'm jealous of your GPUs :) I would love to have a 700-series, not only for the performance, but to learn more about Boost 2.0 (with the Temperature Target). From what I understand, the user gets to choose one of the target types to be active, and the other target type is completely ignored. But maybe I'm wrong. NVIDIA's webpage is horribly lacking in terms of describing the functionality. http://www.geforce.com/hardware/technology/gpu-boost-2

I'm told that, with the Titans at least, they actually don't start thermal downclocking from Max Boost until the temp reaches 80*C. So that's another thing to keep in mind. You could very easily test this by unsetting "Auto Fan" in Precision X, manually setting the fan to lower speeds to watch the temp go up, and then monitoring the temperature at which you first see the card downclock. Usually it's about 2*C above the thermal drop, so it'll start downclocking around either 72*C or 82*C. But then it stores a "history" of temps, and won't upclock unless it thinks it won't get tripped again. That's why I recommend max fan at 1 degree below the thermal limiting temp.

So... If you're up for testing it, I'd just be curious to know if a 780 Ti starts thermal downclocking at 70-72*C, or at 80-82*C instead. You don't have to do the test if you'd prefer not to, but I generally think anything up to 85*C is safe.

Regarding that... One time, I had suspended BOINC, unset Auto in Precision-X, and manually set a low fan % so I could sleep in the room. When I woke up, I found that I had used BOINC snooze by accident (so BOINC kicked back on an hour later), and my GPUs were COOKING. They were actually at the "critical" thermal limit -- Precision-X was reporting 100*C on the GTX 660 Ti, and instead of the 1241MHz max boost or 1024MHz normal 3D clock, it was clocked at something ridiculous like 365MHz. I'm glad that extra safety net was there; that was a true scare. I'm never ever ever ever going to uncheck the "Auto" fan curve again, unless I'm doing a test and will remember to put it back on.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36323 - Posted: 16 Apr 2014 | 3:47:46 UTC - in response to Message 36322.
Last modified: 16 Apr 2014 | 3:48:36 UTC

I'm almost positive the thermal dropoff for a 780Ti is 80*C, not 70*C. So, in terms of performance, you could get away with a curve that has 100% fan at 79*C, and it wouldn't thermal downclock. But, if you wanted to keep the GPUs even cooler (for less risk of work unit errors maybe?), you could put the 100% point even lower (like at 74*C, 70*C, or 69*C). That's my opinion at least.

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36324 - Posted: 16 Apr 2014 | 3:48:23 UTC
Last modified: 16 Apr 2014 | 4:10:25 UTC

Hello

I could not complete the download of any GPUGrid GPU unit, even though GPUGrid CPU units (CPU Only App 1.05) have come through; I did not know these CPU units existed. So far I have tried to download only four units; all of them have been stuck at "Download: retry in HH:MM:SS" but never finish downloading. At this moment there is an acemd.815-42.exe unit in that state which never finishes downloading.
When units from several projects started to download, Avast Free began to block them. I accepted them one by one, but I think for one of them I mistakenly selected the option that prevents downloading the file. Maybe that's the problem: Avast may be blocking the download, and for that reason it never ends.
As I cannot get GPUGrid units, I downloaded some Nvidia PrimeGrid units, which are being processed at this moment, but I do not know what percentage of the graphics card's capacity is being used, because I still do not understand the monitoring interface of the EVGA Precision software. The temperature of the graphics card is holding at 47 degrees Celsius and the fans of the graphics card are running at 48%.

Edit: Now, in the transfers section of Boinc Manager, the pending GPU unit download is marked as "Project Stopped".


Thank you all for the answers.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Level
Glu
Message 36325 - Posted: 16 Apr 2014 | 4:31:34 UTC - in response to Message 36324.

EMYArg: I don't use Avast so I'm not sure what to do in order to successfully download and run GPUGrid tasks. I've never had any trouble with AVG and BOINC projects.

Precision-X won't give you a good idea of the actual utilization of your card by the various projects and tasks you're crunching. For real-time monitoring I would recommend Nvidia Inspector. It's also a very powerful tweaking tool in its own right once you know more about your hardware.

Jacob: Yes, I can independently set separate Power and Temp Targets and tell Precision-X which to prioritize. The default Temp Target is 82C, so I believe you are right about the raised threshold of GK110.

I'll run those tests you suggested sometime in the next day or two probably and let you know what I find.

mikey
Joined: 2 Jan 09
Posts: 291
Credit: 2,038,916,115
RAC: 10,332,146
Level
Phe
Message 36326 - Posted: 16 Apr 2014 | 11:38:15 UTC - in response to Message 36324.


Avast may be blocking the download, and for that reason it never ends.

Thank you all for the answers.


Go into your Avast program and exclude the Boinc folders to keep Avast from checking them. IF there is a virus, then it WILL try to get out of the Boinc folders and get caught by Avast; if it is a false positive, as is most likely, you will be fine. Either way the only thing directly affected will be Boinc, and as long as it only connects to the project servers, it isn't really your problem.

As far as gpu crunching goes, I too have been doing it a LONG time; I have never adjusted the software beyond the defaults and have never burnt up a video card. I HAVE had some get so gunky that the fan slowed down or stopped and the card overheated, but the machine shut itself down rather than burning up the gpu. I was able to free the fan and get back to crunching with no problems. You can also buy aftermarket fans for your gpus, but I have not done that either, although the last few gpus I have bought do have multiple fans on them instead of just a single one. I use both Nvidia and AMD gpus, and on some projects I crunch more than one unit at a time, while at other projects one unit at a time keeps the gpu busy.

TJ
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Message 36328 - Posted: 16 Apr 2014 | 13:23:25 UTC

My eVGA GTX 660 Ti 3GB FTW has a fan that only allows settings of 30% to 80%, as shown by the yellow dashed lines on the fan curve. My fan curve is set so that it reaches max fan (80%) right before the thermal limiting temp (70*C), so 80% at 69*C. To answer your question, it's usually at 80% 24/7, because I run a hot computer. And I've not experienced any harm running my fans like that.

Right, and we are talking here about an EVGA GTX660; I have two of those, and the maximum of the fan is 75%, while it actually runs at 74% max. No program, whether Precision X, MSI Afterburner or Asus GPUTweak, can get the fan to 75%. With more rigs in the room, the ambient temperature rises quite quickly when it gets warmer outside, and so my first EVGA 660 often runs at 74°C, and at 76-78°C with an SDOERR_BARNA. However, after a year of 24/7 there have been no issues with the cards, besides a lot of errors with SANTI's. But I will soon replace them with one GTX780Ti, as it can do a little more in the same time as two 660's.
____________
Greetings from TJ

EMYArg
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Message 36330 - Posted: 16 Apr 2014 | 17:37:23 UTC
Last modified: 16 Apr 2014 | 17:38:06 UTC

Hello

Avast Free was blocking the download of the GPUGrid units, but I have already solved the problem. I have a "Long runs" unit running, and according to EVGA Precision and TThrottle the temperature of the graphics card is 50 degrees Celsius, with the fans of the graphics card working at 50%.
I think a temperature of 50 degrees Celsius is perfect, what do you think?


Thanks to all.

Jacob Klein
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Message 36332 - Posted: 16 Apr 2014 | 18:04:35 UTC - in response to Message 36330.
Last modified: 16 Apr 2014 | 18:05:19 UTC

I think that Temperature, GPU Usage, and Power Usage, are all 3 factors that you should care about, if you are worried about wear and tear on your GPU.

If you want to LIMIT how hard your GPU works, you should do it using the POWER TARGET % value, setting it to something below 100%. What this value really determines is how many watts the GPU is allowed to draw to do its work and run its fans; the card then lowers clocks and voltage as needed to stay under that budget. It's the best single thing to manipulate.
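For what it's worth, the same idea works from the command line with nvidia-smi, if your driver exposes power controls for your card (not all GeForce drivers do). A minimal sketch, assuming a single GPU and administrator rights; the 90% figure is just an example:

import subprocess

# Ask the driver for the board's default power limit in watts.
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=power.default_limit",
     "--format=csv,noheader,nounits"])
default_watts = float(out.decode().strip().splitlines()[0])

# A 90% power target, roughly what the Precision-X slider applies.
target_watts = round(default_watts * 0.90, 2)

# -pl takes an absolute wattage, not a percentage, and needs admin rights.
subprocess.check_call(["nvidia-smi", "-pl", str(target_watts)])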

Whatever you do (limit or no limit)... if you keep the temperature below 70*C (by setting a custom fan curve in Precision-X), and you stay away from any options regarding over-clocking or over-volting... then your GPU should be perfectly fine.
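To make the fan-curve idea concrete, here is the kind of temperature-to-fan mapping such a curve encodes. This is purely an illustrative sketch with made-up curve points; Precision-X does this internally through its GUI, not through any script interface:

def fan_percent(temp_c, points=((40, 30), (60, 50), (69, 80))):
    """Linearly interpolate a fan duty cycle from GPU temperature.

    `points` are illustrative (temp C, fan %) pairs; many cards only
    accept a limited duty-cycle band (e.g. 30-80%), so the curve is
    clamped at both ends.
    """
    if temp_c <= points[0][0]:
        return points[0][1]
    if temp_c >= points[-1][0]:
        return points[-1][1]
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(69))  # -> 80: maximum fan just before the 70 C limit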

Regards,
Jacob

EMYArg
Send message
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36333 - Posted: 16 Apr 2014 | 18:16:05 UTC
Last modified: 16 Apr 2014 | 18:17:47 UTC

I am processing without having changed anything related to voltages or other values; the graphics card is running with the settings it had when it came out of its box. So I am surprised that it is processing at 50.3 degrees Celsius, when the Asus Nvidia GeForce GTX 650 Ti Boost I had before was never below 70 degrees Celsius.


Thanks.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36397 - Posted: 18 Apr 2014 | 18:01:06 UTC - in response to Message 36333.

Regarding 50°C: either your card is not working fully (e.g. because the CPU might be too busy to serve it) or your cooling solution is a bit over-engineered. For your GPU I'd expect a GPU utilization of 85 - 95% running GPU-Grid; if it's significantly below that, something is holding your card back. If your cooling is simply better than what manufacturers usually aim for (70 - 80°C)... be happy :)

MrS
____________
Scanning for our furry friends since Jan 2002

EMYArg
Send message
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36467 - Posted: 20 Apr 2014 | 22:37:52 UTC

I don't know what percentage of the graphics card is being used to process units; the EVGA Precision software does not give that information, or at least I have not seen it. I have an AMD FX-8350 and BOINC Manager configured to use 87.5% of the processor cores. The "Long Runs" tasks of GPUGrid finish in approximately 11 hours.
I have never had a GTX 660 graphics card before, so I don't know whether 11 hours is what should be expected of this card, but I assume it is.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36469 - Posted: 20 Apr 2014 | 23:01:17 UTC - in response to Message 36467.
Last modified: 20 Apr 2014 | 23:12:21 UTC

Somewhere in the ballpark of 9-11 hours sounds about right.

You can use Precision-X's "Monitoring" button (or, in the settings, the "Monitoring" tab) to choose values to monitor (set the Graph checkmark for each thing you want to watch). Then click OK, click the arrow at the bottom of Precision-X to expand the Performance Log, and double-click it to see all of the monitors configured to be shown.

I recommend watching the following (a small script for logging the same values is sketched after this list):
- Clock (the current actual Mhz clock of the GPU)
- Temperature (in *C)
- Usage (this is GPU Usage, a primary way to see how hard the GPU is worked)
- Memory Usage (how much GPU VRAM is used)
- Fan Speed (What % the GPU Fan is running)
- Power (Current Power %; remember, limiting this by setting a certain Power Target % is the main way to limit how hard the GPU will work)
- Power Limit (Indicates if the GPU is currently being limited by a Power Target %)
- Utilization Limit (Indicates if the GPU is currently not boosting because utilization [GPU Usage %] is not consistently high enough to warrant additional boost)
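If you'd rather log those same sensors outside Precision-X, polling nvidia-smi from a small script works too. A minimal sketch, assuming nvidia-smi is on the PATH and your card reports all of these fields (some GeForce drivers omit a few):

import subprocess
import time

# The same sensors as the Precision-X graphs, by nvidia-smi's field names.
FIELDS = ("clocks.sm,temperature.gpu,utilization.gpu,"
          "memory.used,fan.speed,power.draw")

while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"])
    # One line per GPU, e.g. "1097 MHz, 50, 87 %, 912 MiB, 50 %, 98.50 W"
    print(out.decode().strip())
    time.sleep(5)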

You can also use GPU-Z to monitor those values. The sensors in GPU-Z are a bit easier to view quickly.

I personally have the temperature monitors configured, via Precision-X, to be tray icons, so I can easily see the temperatures of my 3 GPUs :) and immediately know if a temp exceeds the 70*C thermal limit; nobody messes with my Max Boost!

EMYArg
Send message
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36470 - Posted: 20 Apr 2014 | 23:43:52 UTC
Last modified: 20 Apr 2014 | 23:48:12 UTC

I have already installed GPU-Z; "GPU Load" is 87%, which looks like a correct value to me. In EVGA Precision X I have all the values to monitor enabled, but I don't know where it shows the GPU usage percentage.


Thank you for responding.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36471 - Posted: 20 Apr 2014 | 23:57:22 UTC - in response to Message 36470.
Last modified: 21 Apr 2014 | 0:06:33 UTC

To show the "Hardware Monitor" window:
- Expand the tray at the bottom of Precision-X by clicking the little downward pointing arrow
- Click the "Performance Log" button, to bring the graphs into view
- Double-click within the graph area
- Use the mouse to hover, to see what the past values were

Here is a picture of my Precision-X Performance Log window, called "Hardware Monitor":
https://707a3a.bn1304.livefilestore.com/y2pMdNRA9bnfh-6Vfo7o_UK6twNVeCjeT1ETzdRIj5kZk2aW8ArTVP1XFe66EwOtEShiJAWty3GZzWFkmgw84o6ojL5puQgQshL98ORnCpLQJo/20130523.png?psid=1

Here's what the main window should look like, when you have the tray graphs visible:
https://8e7a3a.bn1.livefilestore.com/y2pn0F3cHBvGuNgh1brGapZ9J6U3IbiIhSNvHozrxSHWXwaNti-OKWjmCHmBCEAEoDs7bP7A9-3ar9qA7c-P39lGc-9PtHM0MBZylFSgWSoQss/20130523.png?psid=1

On that image, the 4th graph is the Usage %.

EMYArg
Send message
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36472 - Posted: 21 Apr 2014 | 0:06:13 UTC
Last modified: 21 Apr 2014 | 0:07:01 UTC

I've already been able to see it; it shows a GPU usage value of 86% to 88%, the same as GPU-Z. I had not understood your explanation because of language problems, which is why I couldn't find it, but after your last response I found it.


Thank you for responding.

Jacob Klein
Send message
Joined: 11 Oct 08
Posts: 1127
Credit: 1,901,927,545
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36473 - Posted: 21 Apr 2014 | 0:08:07 UTC - in response to Message 36472.

Ah, so are you good now?

GPU Usage % = GPU Load = GPU Utilization.
All 3 are the same thing.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36475 - Posted: 21 Apr 2014 | 10:27:39 UTC

EMYArg, it seems like your GPU is running well! And this temperature is excellent for air cooling. If a GPU fails under such conditions, it was very likely a chip that would have failed early anyway (rare, but not impossible). I wrote about this somewhere near the beginning of this thread.

MrS
____________
Scanning for our furry friends since Jan 2002

EMYArg
Send message
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36613 - Posted: 24 Apr 2014 | 14:55:18 UTC
Last modified: 24 Apr 2014 | 14:56:20 UTC

I am still processing normally. Winter is now beginning here, so I have to wait until November for the warm days of summer to start; in those days I will find out how good my cooling is.


Thanks to all of you for responding. All the doubts that i had are now resolved.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36631 - Posted: 25 Apr 2014 | 9:44:43 UTC - in response to Message 36613.

If the GPU runs at 50°C at an ambient temperature of 20°C, it's a good estimate that it will run at 60°C at an ambient temperature of 30°C: the card sits a roughly constant number of degrees above the room. This assumes constant fan speed, which might not be the case.
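A minimal sketch of that constant delta-T rule of thumb, just to make the arithmetic explicit:

def predict_gpu_temp(gpu_now_c, ambient_now_c, ambient_later_c):
    # At constant fan speed and load, the GPU sits a roughly fixed
    # number of degrees above room temperature.
    return ambient_later_c + (gpu_now_c - ambient_now_c)

print(predict_gpu_temp(50, 20, 30))  # -> 60, matching the estimate above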

MrS
____________
Scanning for our furry friends since Jan 2002

EMYArg
Send message
Joined: 5 Apr 12
Posts: 32
Credit: 381,502,763
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36663 - Posted: 26 Apr 2014 | 4:28:37 UTC
Last modified: 26 Apr 2014 | 4:40:28 UTC

Hello

I'm using one of these cases: http://www.sentey.com/en/bx1-4237-v21 Although I have 7 fans installed, only the 2 side fans are working to bring air in, with the rear fan and the two top fans pulling air out. One of the top fans does not work and will be replaced in the coming days.
While processing, the processor runs about 2 or 3 degrees below the graphics card; it is cooled by a Coolermaster Gemin II S524.
I also replaced the original power supply that came with the case with a Sentey 80+ Gold 850 watt unit.


Thanks to all of you for responding.

TheFiend
Send message
Joined: 26 Aug 11
Posts: 99
Credit: 2,500,112,138
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 36664 - Posted: 26 Apr 2014 | 7:56:23 UTC - in response to Message 36663.

Hello

I'm using one of these cases: http://www.sentey.com/en/bx1-4237-v21 Although I have 7 fans installed, only the 2 side fans are working to bring air in, with the rear fan and the two top fans pulling air out. One of the top fans does not work and will be replaced in the coming days.
While processing, the processor runs about 2 or 3 degrees below the graphics card; it is cooled by a Coolermaster Gemin II S524.
I also replaced the original power supply that came with the case with a Sentey 80+ Gold 850 watt unit.


Thanks to all of you for responding.


You could drop your CPU temps significantly using a better cooler like the Coolermaster Hyper 212 EVO or the Xigmatek SD128264 Aegir.

I use the Xigmatek on my main systems, and at full load the core temps on my 1090Ts are only 17°C higher than idle, running push/pull fans on it.
