
Message boards : Graphics cards (GPUs) : Low power GPUs performance comparative

ServicEnginIC
Message 52615 - Posted: 11 Sep 2019 | 17:33:53 UTC

I live in the Canary Islands.
Nice weather for going to the beach every day of the year.
But it is not so kind to high-power electronic devices.
I've set a power limit of 120 watts for the graphics cards I purchase, so that they can process most of the time without risk of spontaneous ignition.
No plans for air conditioning so far.

Several days ago, Toni released a limited batch of ACEMD3 test WUs.
Thank you very much, Toni. Linux users were waiting for this!
I realized that the ACEMD3 test WUs might be an objective means of testing and comparing my currently working GPUs against one another.
I've been lucky to catch at least one TONI_TESTDHFR206b WU for every GPU I currently have in production under 64-bit Linux, with SWAN_SYNC enabled.
And here is a table comparing their performance.

[Table image: GPU performance comparison]

Where:
GPU: Graphics card model. All of them are factory-overclocked models.
Series: Graphics card micro-architecture.
Power: Card's maximum rated power, as reported by the nvidia-smi command, in watts.
Temp.: Maximum temperature reached while processing the WU, as reported by the Psensor application, in ºC, at a room temperature of 26 ºC.
Time: Execution time for each WU, in seconds.
R.P. GTX: Relative performance, comparing each card's execution time against the others' (see the sketch below).
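
Presumably the R.P. column is just a ratio of execution times. Here is a minimal sketch of that calculation in Python, under that assumption; the GTX 1050 Ti time is a made-up placeholder, while the other two figures are quoted later in this thread:

```python
# Relative performance as a ratio of execution times: with one card taken
# as the reference, R.P. = time_reference / time_card, so a card finishing
# in half the reference time would score 2.0.
times = {
    "GTX 1660 Ti": 1830.0,  # seconds, figure quoted later in this thread
    "GTX 1060":    3940.0,  # seconds, figure quoted later in this thread
    "GTX 1050 Ti": 5600.0,  # hypothetical placeholder
}

reference = "GTX 1660 Ti"
for gpu, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{gpu:12s} {t:6.0f} s  R.P.: {times[reference] / t:.2f}")
```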

Erich56
Message 52617 - Posted: 11 Sep 2019 | 18:25:53 UTC

ServicEnginIC, thanks for that! Good work :-)

rod4x4
Message 52620 - Posted: 12 Sep 2019 | 0:02:21 UTC - in response to Message 52615.
Last modified: 12 Sep 2019 | 0:36:33 UTC

Nice work!
Very informative.
I guess you don't spend much time at the beach with all this GPUgrid testing!!

ServicEnginIC
Message 52633 - Posted: 17 Sep 2019 | 22:19:00 UTC

I guess you don't spend much time at the beach with all this GPUgrid testing!!

+1 ;-)

These tests confirmed what I previously suspected:
GPUGrid is the most power-demanding project I currently process.
GPUGrid tasks really do squeeze the GPU's power to its maximum!
Here is the nvidia-smi information for my most powerful card while executing one TONI_TESTDHFR206b WU (120 W of 120 W used):

[Image: nvidia-smi output showing 120 W of 120 W in use]

And here are Psensor curves for the same card, from the GPU at rest to executing one TONI_TESTDHFR206b WU:

[Image: Psensor temperature and fan-speed curves for the same card]
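
The same figures can also be captured without a screenshot: nvidia-smi reports them in machine-readable form. A minimal sketch using its standard --query-gpu fields (assuming the NVIDIA driver and Python 3 are installed):

```python
import subprocess

# Query current draw, board power limit, temperature and load,
# one CSV line per installed GPU.
fields = "name,power.draw,power.limit,temperature.gpu,utilization.gpu"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # e.g. "GeForce GTX 1660 Ti, 119.80 W, 120.00 W, 80, 100 %"
```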

rod4x4
Message 52634 - Posted: 18 Sep 2019 | 0:00:10 UTC - in response to Message 52633.

GPUGrid is the most power-demanding project I currently process.

Agreed.

here are Psensor curves for the same card

Always interesting to see how other volunteers run their systems!
What I found interesting from your Psensor chart:
- GPU running warm at 80 degrees.
- CPU running cool at 51 degrees (100% utilization reported).
- Chassis fan running at 3600 rpm (do you need ear protection from the noise?).

As a comparison, my GTX 1060 GPU on Win10 processes the a89-TONI_TESTDHFR206b-23-30-RND6008_0 task in 3940 seconds. Your GTX 1660 Ti is completing test tasks quicker, at around the 1830-second mark.
The Turing cards (on Linux) are a nice improvement in performance!
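
Worked out, that is roughly a 3940 / 1830 ≈ 2.15x speedup for the GTX 1660 Ti, although the OS difference (Win10 vs Linux) and the Windows CUDA 10.1 slowdown Toni mentions below suggest the pure hardware gap is somewhat smaller.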

Toni
Volunteer moderator
Project administrator
Project developer
Project scientist
Message 52636 - Posted: 18 Sep 2019 | 10:46:02 UTC - in response to Message 52634.
Last modified: 18 Sep 2019 | 10:47:12 UTC

Thanks for the data.

There may be a problem with the Windows CUDA 10.1 app - it's slower than it should be. We are working on it.

SWAN_SYNC is ignored in acemd3.
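
For readers unfamiliar with it: SWAN_SYNC is an environment variable that GPUGrid volunteers have traditionally set to make ACEMD dedicate a full CPU thread to servicing the GPU. A trivial sketch to check whether it is visible to a process (with acemd3 the value makes no difference, per the note above):

```python
import os

# acemd3 ignores this variable; older ACEMD versions read it at startup.
print("SWAN_SYNC =", os.environ.get("SWAN_SYNC", "<not set>"))
```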

rod4x4
Message 52639 - Posted: 18 Sep 2019 | 11:46:20 UTC - in response to Message 52636.


There may be a problem with the Windows CUDA 10.1 app - it's slower than it should be. We are working on it.

SWAN_SYNC is ignored in acemd3.

Thanks Toni. Good to know.

ServicEnginIC
Message 52640 - Posted: 18 Sep 2019 | 11:50:14 UTC

- CPU running cool at 51 degrees (100% utilization reported)

The CPU in this system is a low-power Q9550S version, with a high-performance Arctic Freezer 13 CPU cooler.
And several degrees of the CPU temperature are due to radiated heat coming from the GPU...

- GPU running warm at 80 degrees.

I had to work very hard to keep the GPU temperature below the 80s at full load for this particular graphics card.
But perhaps that is matter for another thread...
http://www.gpugrid.net/forum_thread.php?id=4988

ServicEnginIC
Message 52658 - Posted: 19 Sep 2019 | 7:48:11 UTC

For comparison:
Here is the same GTX 1660 Ti card mentioned above, now running Einstein@Home WUs, due to Toni's current request not to run acemd3 test WUs on Linux systems, for better testing of the Windows ones.

[Images: nvidia-smi output and Psensor curves while running Einstein@Home WUs]

The spikes in the temperature graph are the transitions between two consecutive E@H WUs.
Comparing the data, GPU power usage drops from 120 W to 83 W, and temperature accordingly drops from a peak of 80 ºC to 69 ºC.
Toni, this speaks very well of your acemd3 Linux code optimization for getting the maximum from the GPU.
Now I'm crossing my fingers for your success with the Windows one!
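
An idle-versus-load comparison like the Psensor charts above can also be logged by polling nvidia-smi on an interval. A minimal sketch, with an arbitrary one-second polling period:

```python
import subprocess
import time

# Poll GPU power draw and temperature once per second; stop with Ctrl+C.
QUERY = ["nvidia-smi",
         "--query-gpu=power.draw,temperature.gpu",
         "--format=csv,noheader"]

while True:
    reading = subprocess.run(QUERY, capture_output=True, text=True,
                             check=True).stdout.strip()
    print(time.strftime("%H:%M:%S"), reading)  # e.g. "12:00:01 83.21 W, 69"
    time.sleep(1.0)
```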

mmonnin
Message 52685 - Posted: 21 Sep 2019 | 1:49:31 UTC

E@H is the most mining-like BOINC project I've seen, in that it responds better to memory OC than to core OC. Try a math project that can easily run parallel calculations to push GPU temps.

Retvari Zoltan
Message 52707 - Posted: 23 Sep 2019 | 21:57:59 UTC - in response to Message 52636.

SWAN_SYNC is ignored in acemd3.
Well, the CPU time equals the Run time, so could you elaborate on this?
Could someone without SWAN_SYNC check their CPU time and Run time for ACEMD3 tasks, please?

mmonnin
Message 52710 - Posted: 24 Sep 2019 | 0:10:31 UTC - in response to Message 52707.
Last modified: 24 Sep 2019 | 0:10:42 UTC

SWAN_SYNC is ignored in acemd3.
Well, the CPU time equals the Run time, so could you elaborate on this?
Could someone without SWAN_SYNC check their CPU time and Run time for ACEMD3 tasks, please?


CPU Time = Run time with the new app.
https://www.gpugrid.net/result.php?resultid=21402171
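
One way to read "CPU Time = Run time": when a process busy-waits (spins polling the GPU) instead of blocking, its CPU time accumulates at the same rate as wall-clock time. A small standalone illustration of the two measurements (not GPUGrid code):

```python
import time

def busy_wait(seconds: float) -> None:
    """Spin-loop: burns CPU for the whole interval, like a polling sync."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def blocking_wait(seconds: float) -> None:
    """Sleep: the thread blocks, so almost no CPU time accumulates."""
    time.sleep(seconds)

for wait in (busy_wait, blocking_wait):
    wall0, cpu0 = time.perf_counter(), time.process_time()
    wait(2.0)
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    print(f"{wait.__name__:13s} wall={wall:.2f}s cpu={cpu:.2f}s")
```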

