
Tests on GTX680 will start early next week [testing has started]

GDF (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist)
Message 24095 - Posted: 23 Mar 2012 | 9:42:40 UTC

We are looking forward to testing the new NVIDIA architecture. We will report on the performance soon, and we really thank the anonymous cruncher for the donation.

gdf

wiyosaya
Message 24124 - Posted: 24 Mar 2012 | 5:19:48 UTC

As I am going to do a new PC build later this year, I'll be interested in these results. I have seen several "gamer" oriented reviews that also tested "compute" capabilities, and I was not impressed by the compute results. All compute tests except for one were slower than previous gen cards. So, I still hold out hope that the GTX 680 will perform better than the previous gen cards.

As a summary of what I have read:

Power consumption is down quite a bit - TDP is around 195W under load.
Games run faster.

It's not much, I know, but that pretty much sums it up. ;)

Retvari Zoltan
Message 24133 - Posted: 24 Mar 2012 | 18:06:17 UTC - in response to Message 24124.

... I'll be interested in these results.

Just like we all are.

KING100N
Message 24149 - Posted: 25 Mar 2012 | 16:22:47 UTC
Last modified: 25 Mar 2012 | 16:23:10 UTC

I have a bad feeling

WARNING: The GTX 680 is **SLOWER** than the GTX 580

frankhagen
Message 24150 - Posted: 25 Mar 2012 | 18:37:18 UTC - in response to Message 24149.

I have a bad feeling

WARNING: The GTX 680 is **SLOWER** than the GTX 580


That's on an LLR app, which relies heavily on 64-bit capabilities.

It was pretty obvious right from the start that the GK104 design was not planned for that.

GDF
Message 24151 - Posted: 25 Mar 2012 | 19:05:06 UTC - in response to Message 24150.
Last modified: 25 Mar 2012 | 19:06:52 UTC

The GTX680 has 100% more flops and 30% more transistors than the GTX580. We would be happy to have something in between those numbers.
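
As a quick sanity check of the flops figure, here is the peak single-precision arithmetic from the publicly listed core counts and clocks (not a GPUGrid measurement; peak FP32 is assumed to be cores × 2 FLOP per clock × shader clock):

```python
# Peak FP32 estimate: cores * 2 FLOP (multiply-add) per clock * shader clock.
def peak_gflops(cores, shader_clock_mhz):
    return cores * 2 * shader_clock_mhz / 1000.0

gtx580 = peak_gflops(512, 1544)    # Fermi shaders run at the "hot" clock (2 x 772 MHz)
gtx680 = peak_gflops(1536, 1006)   # Kepler has no separate shader clock
print(f"GTX 580 ~{gtx580:.0f} GFLOPS, GTX 680 ~{gtx680:.0f} GFLOPS, "
      f"ratio ~{gtx680 / gtx580:.2f}x")   # roughly the '100% more flops' quoted above
```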

Because it requires CUDA 4.2, which is still experimental, we are thinking of releasing it as a test application.

gdf

frankhagen
Message 24152 - Posted: 25 Mar 2012 | 19:41:16 UTC - in response to Message 24151.

The GTX680 has 100% more flops and 30% more transistors than the GTX580. We would be happy to have something in between those numbers.

Due to the fact that it requires cuda4.2 which is still experimental we are thinking to release it as a test application.


???

The PG (PrimeGrid) sieve apps, which are still CUDA 2.3, do work. Maybe they aren't using the card's full potential, but they produce valid results.

GDF
Message 24154 - Posted: 26 Mar 2012 | 9:07:19 UTC - in response to Message 24152.

With cuda4.2 I think that we will be within the window of performance I indicated before.

gdf

MarkJ (Volunteer moderator, Volunteer tester)
Message 24160 - Posted: 27 Mar 2012 | 6:48:30 UTC - in response to Message 24154.

With cuda4.2 I think that we will be within the window of performance I indicated before.

gdf


I thought cuda 4.2 was still under NDA. Would a cuda 4.1 app be a reasonable compromise? At least 4.1 is publicly released so we can beta test.

Besides, given all the driver sleep-bug issues, people are not likely to have a 4.2-capable driver installed at the moment.

GDF
Message 24162 - Posted: 27 Mar 2012 | 8:05:22 UTC - in response to Message 24160.

CUDA 4.2 is publicly available from the NVIDIA forums but not widely advertised.
You need the latest drivers to run on a GTX 680.

gdf

GDF
Message 24210 - Posted: 2 Apr 2012 | 15:58:18 UTC - in response to Message 24162.

We now have two GTX 680s installed locally and we are testing the BOINC app.
Thanks for the donations.

gdf

Munkhtur
Message 24234 - Posted: 4 Apr 2012 | 3:25:00 UTC - in response to Message 24210.

Any results?

GDF
Message 24260 - Posted: 5 Apr 2012 | 14:39:50 UTC - in response to Message 24234.

Due to the Easter holidays, we are taking a break now. The BOINC CUDA 4.2 application is being tested, though.
gdf

5pot
Message 24284 - Posted: 6 Apr 2012 | 19:02:43 UTC

Just got my 680 in. Unfortunately I have to keep it on Windows for gaming, and I'd like to know whether or not I should attach it to the project yet. I don't want to be returning a whole bunch of invalid or errored tasks.

skgiven (Volunteer moderator, Volunteer tester)
Message 24286 - Posted: 6 Apr 2012 | 19:22:27 UTC - in response to Message 24284.

I'm not sure what the situation is. Gianni indicated that he might release a Beta app, and the server status shows 17 beta tasks waiting to be sent. It's been like that for a few days. These might be for Linux only though?

You need the latest drivers, and as far as I am aware, the GTX680 will not run normal or Long tasks. So I suggest you attach to the project, configure a profile to only run Betas, and see if any come your way.

5pot
Message 24288 - Posted: 6 Apr 2012 | 20:55:39 UTC

It appears that they are Linux only. If I wasn't running out of drive space, I would give this rig a dual-boot, since I now know how to configure it. Don't feel like using USB, since I'm running WCG currently. Might go pick up a larger SSD this weekend in order to accommodate it.

GDF
Message 24298 - Posted: 7 Apr 2012 | 12:25:41 UTC - in response to Message 24288.

Just keep Windows. We use Linux because it is easier for us, but we should have a Windows application soon after.

gdf

5pot
Message 24300 - Posted: 7 Apr 2012 | 13:48:28 UTC

First off, let me apologize for the "tone" of my written voice, but after spending six hours last night trying to install Ubuntu, I can say I HATE the "disgruntled" GRUB. The Windows 7 install refused to play nice with it. I kept getting a "grub-efi failed to install on /target" error, as well as MANY others. I even went to the trouble of disconnecting my Windows SATA connection, but still kept getting the same error on a fresh drive. Due to the fact that it is Easter weekend (and the habit of wanting betas, à la WCG), I have decided to uninstall Windows 7 in order to accomplish my goal. Since this is mainly a crunching rig (zero files stored internally; I keep everything on external encrypted HDDs), and besides the one game I play (which was ruined by a recent "patch"), having Windows does nothing for me ATM. Should have it uninstalled shortly, and hopefully with it gone GRUB will not be so grumpy (and neither will I).


Happy Easter

P.S. Isn't learning fun? ;)

Richard Haselgrove
Message 24301 - Posted: 7 Apr 2012 | 14:00:14 UTC - in response to Message 24286.

I'm not sure what the situation is. Gianni indicated that he might release a Beta app, and the server status shows 17 beta tasks waiting to be sent. It's been like that for a few days. These might be for Linux only though?

You need the latest drivers, and as far as I am aware, the GTX680 will not run normal or Long tasks. So I suggest you attach to the project, configure a profile to only run Beta's and see if any come your way.

In order to crunch anything at all - beta or otherwise - you need both a supply of tasks and an application to run them with.

The standard BOINC applications page still seems to work, even if, like me, you can't find a link from the redesigned front page. No sign of a Beta application yet for either platform, which may take some time pressure off the OS (re-)installs.

5pot
Message 24302 - Posted: 7 Apr 2012 | 14:15:06 UTC
Last modified: 7 Apr 2012 | 14:22:25 UTC

Literally getting ready to de-install before you posted that... Is that because those beta WUs can't be sent to anyone on the designated platforms unless they have a 680, meaning they don't want them going to people who have the other apps but not the proper GPU to run them?

EDIT

I wouldn't think they would even bother loading betas unless they were ready to go out; why bother loading them if you're still testing in-house? I would ASSUME it may not be listed on the apps page for the reasons stated above. Even though it is odd that nothing is listed, maybe that's just because the app doesn't matter, since this beta is related to hardware?

http://www.gpugrid.net/forum_thread.php?id=2923#24181

GDF
Message 24304 - Posted: 7 Apr 2012 | 14:46:32 UTC - in response to Message 24302.

The beta WUs are left over from before; they don't go out because there is no beta app at the moment.

1) We will upload a NEW application for Linux that is faster on any Fermi card, and it will work on a GTX 680.
2) It will be compiled with CUDA 4.2.
3) Some days later the same app will be provided for Windows.
4) Later there will be an optimized app for the GTX 680, for Linux and Windows.

Note that we are testing a new app, a new CUDA version and a new architecture, so expect some problems and allow some time. Within 10 days we should have 1 and 2. Some variations on the plan are also possible; we might put out a new CUDA 3.1 app, for instance.


gdf

5pot
Message 24305 - Posted: 7 Apr 2012 | 14:58:13 UTC

Thank you for the detailed post. MUCH appreciated.

Retvari Zoltan
Message 24398 - Posted: 12 Apr 2012 | 1:43:39 UTC
Last modified: 12 Apr 2012 | 1:44:17 UTC

Any progress?
Could you please share some information about the performance of the GTX 680 perhaps?
I'm afraid that the CPU-intensive GPUGrid tasks will suffer a much bigger performance penalty on the GTX 680 than on the GTX 580 (and on the other CC2.0 GPUs). Maybe an Ivy Bridge CPU overclocked to 5 GHz could compensate for this penalty.

5pot
Message 24399 - Posted: 12 Apr 2012 | 5:48:10 UTC

Besides that, Zoltan, from what I can tell the CPU is basically what's crippling the 680 across the board for every project.

However, I've been steadily re-reading several lines from Anandtech in-depth review about the card itself:

1)Note however that NVIDIA has dropped the shader clock with Kepler, opting instead to double the number of CUDA cores to achieve the same effect, so while 1536 CUDA cores is a big number it’s really only twice the number of cores of GF114 as far as performance is concerned.

So if I am correct (and it's 12:22 am, so give me a break if I'm wrong), since we use the shader clock, what this means is that if you were to double the cores of the 580 to 1024 you would be operating at 772 MHz (setting ROPs and everything else aside, as crazy as that sounds). You know, I can't figure this math out, but what I will say is that, as posted earlier, the PrimeGrid sieve ran 25% faster on the 680 (240 s vs 300 s), and I keep looking at the fact that that's roughly the same difference as between the 680's 1005 MHz clock and the 580's 772 MHz. I don't really know where I was going with this, or how I was going to get there, but is that why the sieve increased by 25%? And there's also the 20% decrease in TDP. Compared to the 580, it has 1/3 more cores (1536 vs 1024), but 1/3 fewer ROPs.

Again, sorry for the confused typing, it's late, but that 25% increase in clock just kept staring at me. My bet goes to a 25% increase until the optimized app comes out to take more advantage of CC3.0.

Good night, and I can't wait to see what happens.

GDF
Message 24400 - Posted: 12 Apr 2012 | 7:07:44 UTC - in response to Message 24398.
Last modified: 12 Apr 2012 | 7:08:04 UTC

The Kepler-optimized application is 25% faster than a GTX 580, regardless of the processor, for a typical WU. I don't see why the CPU should have any different impact compared to Fermi.

gdf

Any progress?
Could you please share some information about the performance of the GTX 680 perhaps?
I'm afraid that the CPU intensive GPUGrid tasks will suffer much more performance penalty on GTX 680 than on GTX 580 (and on the other CC2.0 GPUs). Maybe an Ivy Bridge CPU overclocked to 5GHz could compensate this penalty.

5pot
Message 24402 - Posted: 12 Apr 2012 | 12:33:18 UTC
Last modified: 12 Apr 2012 | 12:40:58 UTC

: ). If they had kept the ROP count at 48, as with the 580, it would have been 50% faster, but 25% sounds good to me. Keep up the good work, guys; can't wait till it's released.

EDIT: Are you guys testing on PCIe 2 or 3? I've heard additional increases are coming from this; from what I've seen, roughly 5% on other sites.

skgiven
Message 24410 - Posted: 12 Apr 2012 | 20:01:11 UTC - in response to Message 24402.
Last modified: 12 Apr 2012 | 20:15:08 UTC

Compared to 580, it has 1/3 more cores than 580 (1536 vs 1024), but a 1/3 less ROPS.

A GTX 580 has 512 cuda cores and a GTX 680 has 1536.

CUDA is different from OpenCL. On several OpenCL projects a high CPU requirement appears to be the norm.

I would expect a small improvement when using PCIe 3 with one GPU. If you have two GTX 680s in a PCIe 2 system that drops from PCIe 2 x16 to PCIe 2 x8, then the difference would be much more noticeable compared to a board supporting two PCIe 3 x16 slots. If you're going to get 3 or 4 PCIe 3 capable GPUs then it would be wise to build a system that properly supports PCIe 3. The difference would be around 35% of one card on a PCIe 3 x16, x16, x8 system compared to a PCIe 2 x8, x8, x4 system. For one card it's not really worth the investment.

If we are talking 25% faster at 20% less power, then in terms of performance per Watt the GTX 680 is ~50% better than a GTX 580. However, that doesn't consider the rest of the system.
Of the 300 W a GTX 680 system might use, for example, ~146 W is down to the GPU. Similarly, for a GTX 580 it would be ~183 W. The difference is ~37 W, so the overall system would use ~11% less power. If the card can do ~25% more work, then the overall system improvement is ~39% in terms of performance per Watt.
Add a second or third card to a new 22nm CPU system and factor in the PCIe improvements, and the new system's performance per Watt would be more significant, perhaps up to ~60% more efficient.
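
A minimal sketch of the arithmetic above, taking the quoted figures as assumptions (≈146 W GPU share of a 300 W GTX 680 system, ≈183 W for a GTX 580, ~25% more work per card):

```python
# System-level performance-per-watt comparison, using the figures assumed above.
GPU_680_W, GPU_580_W = 146.0, 183.0                    # assumed GPU share of wall power
SYSTEM_680_W = 300.0                                   # assumed total draw with a GTX 680
SYSTEM_580_W = SYSTEM_680_W - GPU_680_W + GPU_580_W    # same host with a GTX 580 instead
WORK_RATIO = 1.25                                      # GTX 680 assumed to do ~25% more work

power_saving = 1 - SYSTEM_680_W / SYSTEM_580_W
perf_per_watt_gain = WORK_RATIO * SYSTEM_580_W / SYSTEM_680_W - 1

print(f"GTX 580 system: ~{SYSTEM_580_W:.0f} W")                    # ~337 W
print(f"system power saving: ~{power_saving:.0%}")                 # ~11%
print(f"system perf/Watt improvement: ~{perf_per_watt_gain:.0%}")  # ~40% (the '~39%' above)
```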

Retvari Zoltan
Message 24411 - Posted: 12 Apr 2012 | 20:54:44 UTC - in response to Message 24400.
Last modified: 12 Apr 2012 | 20:55:28 UTC

The kepler optimized application is 25% faster than a gtx580 regardless of the processor for a typical WU.

It sounds promising and a little disappointing at the same time (as expected).

I don't see why the CPU should have any different impact compared to Fermi.

Because there is already a 25-30% variation in GPU usage between different types of workunits on my GTX 580. For example, NATHAN_CB1 runs at 99% GPU usage while NATHAN_FAX4 runs at only 71-72%. I wonder how well the GPUGrid client can feed a GPU with as many CUDA cores as the GTX 680 has, when it can only feed a GTX 580 enough to run at 71-72% (and the GPU usage drops as I raise the GPU clock, so the performance is CPU and/or PCIe limited). To be more specific, I'm interested in what the GPU usage of a NATHAN_CB1 and a NATHAN_FAX4 is on a GTX 680 (and on a GTX 580 with the new client).
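
A toy model of the effect being described (purely illustrative, not how the application actually schedules work): if the host-side feed path saturates at a fixed rate, a faster or wider GPU simply idles more, and the reported GPU usage falls even though the card got quicker.

```python
# Toy feed-limited model: GPU usage = min(1, host feed capacity / GPU consumption rate).
def gpu_usage(gpu_relative_speed, feed_capacity):
    """gpu_relative_speed: how fast the GPU could chew through work (stock GTX 580 = 1.0).
    feed_capacity: how fast the CPU/PCIe path can supply work, in the same units."""
    return min(1.0, feed_capacity / gpu_relative_speed)

feed = 0.72   # a task type whose feed path saturates a stock GTX 580 at ~72% usage
for label, speed in [("GTX 580 stock", 1.00),
                     ("GTX 580 +10% clock", 1.10),
                     ("GTX 680 (~1.3x)", 1.30)]:
    print(f"{label}: ~{gpu_usage(speed, feed):.0%} GPU usage")
```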

5pot
Message 24414 - Posted: 12 Apr 2012 | 23:01:01 UTC
Last modified: 12 Apr 2012 | 23:12:40 UTC

I brought the core count up to 1024, instead of 512, since I kept trying to figure out the math for what the improvement was going to be. Meaning, if I doubled the core count, I could do away with the shader clock, as they did in Kepler (I know Kepler was quadrupled, but in terms of performance it was just doubled). The math SEEMED to work out OK. So I was working with 1024 cores at a core clock of 772, which meant 1/3 more cores on the 680 than the 580 (adjusted for the doubled shader frequency). This led to a difference in shader clock of 23.2% faster for Kepler (772/1005). Which meant (to me and my zero engineering knowledge) a benefit of 56.6% (increase in the number of cores × increase in adjusted frequency). However, since there are 1/3 fewer ROPs, that got me down to 23.4% (but if I'm not mistaken, the ROP frequency is calculated off the core clock, and I learned this after adjusting for the 570, 480 and 470; once I learned the ROP frequency I quit trying).

What's weird is that this math kept LOOKING correct the further I went. There was roughly a 45% increase compared to a 570 (as shown on sieve tasks); on a 480 my math showed an increase of roughly 35%, but compared to a 470 it jumped to 61%.

Again, not an engineer, just someone who had the day off. It strikes me as odd, though, that it seemed to work. But adding ROPs in may have been the mistake; I honestly don't even know how important they are for what we do. Meaning that since they are correlated with pixels (again, out of my league :) ) it could be like high memory bandwidth and not mean as much to us. The 25% and 45% increases were the ones that kept my math skills going, because that was what was seen on PPS sieve tasks.

Ah, coincidences..... ;) Oh, and I have been looking for a mobo that supports 3.0 at x16,x16, but I think I've only found one that did and it was like $300. However, I refuse to get one that doesn't, merely because I want everything at 100% (even if the extra bandwidth isn't used).

5pot
Message 24415 - Posted: 12 Apr 2012 | 23:23:24 UTC

One more thing. I'm assuming Zoltan meant, as he already explained in relation to GPUGrid WUs, that like the Einstein apps we may have hit a "wall" where the CPU matters more than the GPU once you reach a certain point. As per his description, some tasks are dependent on a fast CPU; someone in another forum is failing tasks because he has a 470 or a 480 (can't remember which) in a Xeon @ 2.5, which is currently causing him issues.

Chatocl
Message 24417 - Posted: 13 Apr 2012 | 2:22:45 UTC - in response to Message 24415.

I have a 550 Ti, and my CPU, an AMD Athlon X4 underclocked to 800 MHz, is only 10% used by GPUGrid (running in Linux).

I doubt that the CPU can be an issue, at least with the GPUGrid app in Linux.

5pot
Message 24419 - Posted: 13 Apr 2012 | 2:35:28 UTC

FYI. Half of your tasks error out.

5pot
Message 24420 - Posted: 13 Apr 2012 | 3:41:48 UTC

Oh, and it's not about whether or not they'll finish, it's about whether or not the CPU will bottleneck the GPU. I reference Einstein because, as mentioned, anything above a 560 Ti 448 will finish a task in roughly the same GPU time, and what makes the difference in how fast you finish the WU is how fast your CPU is. This can SEVERELY cripple performance.

robertmiles
Message 24421 - Posted: 13 Apr 2012 | 3:50:58 UTC - in response to Message 24162.

cuda4.2 is publicly available from nvidia forums but not widely advertised.
You need the latest drivers to run on gtx680.

gdf


Could you mention which of the new drivers allow using it without the sleep bug?

5pot
Message 24423 - Posted: 13 Apr 2012 | 4:00:38 UTC
Last modified: 13 Apr 2012 | 4:03:28 UTC

The newest R300 series doesn't have the sleep bug, but it is a beta. CUDA 4.2 came out with 295, so it's either run the beta or wait till a WHQL driver is released. The beta version is 301.24. Or, if possible in your situation, you can tell Windows to never turn off the display. This prevents the sleep bug, and you can do whatever you want.

robertmiles
Message 24424 - Posted: 13 Apr 2012 | 4:10:19 UTC - in response to Message 24423.

Thanks. Now I'll go look for GTX680 specs to see if it will fit the power limits for my computer room, and the length limits for my computers.

5pot
Message 24430 - Posted: 13 Apr 2012 | 20:50:07 UTC

Oh, if you're possibly getting a 680, use 301.10

robertmiles
Message 24433 - Posted: 14 Apr 2012 | 5:02:05 UTC - in response to Message 24424.

Thanks. Now I'll go look for GTX680 specs to see if it will fit the power limits for my computer room, and the length limits for my computers.


The GTX 680 exceeds both the power limit and the length limit for my computers. I'll have to look for later members of the GTX6nn family instead.

5pot
Message 24436 - Posted: 14 Apr 2012 | 14:35:38 UTC

Just a friendly reminder about what you're getting with anything less than a 680/670: the 660 Ti will be based off the 550 Ti's board. Depending on each user's power requirements, I would HIGHLY recommend waiting for results from said boards, or would recommend the 500 series. Since a 660 Ti will most likely have half the cores and a 15% decrease in clock compared to the 580, this could severely cripple the other 600 series cards as far as crunching is concerned. Meaning, a 560 Ti 448 and above will, IMO (I can't stress this enough), probably be able to beat a 660 Ti when it's released. Again, IMHO. This is as far as speed is concerned. Performance/watt may be a different story, but a 660 Ti will be based off of the 550 Ti's specs (keep that in mind).

As always, Happy Crunching

5pot
Message 24437 - Posted: 14 Apr 2012 | 21:29:42 UTC

Sorry, meant to say half the cores of the 680 in my prior statement. Again, this new design is not meant for crunching, and all boards are effectively "one off", so 660 Ti = 550 Ti BOARD.

Sorry for the typo.

P.S. Hopefully next week Gianni?

Munkhtur
Message 24450 - Posted: 17 Apr 2012 | 9:50:05 UTC - in response to Message 24234.
Last modified: 17 Apr 2012 | 9:53:06 UTC

My GTX 680 didn't compute any work from GPUGrid.
I tested it on S@H, and it works without problems.

So I bought it from the US, then sent it to Mongolia,
and from Mongolia to Korea.

fml

Retvari Zoltan
Message 24451 - Posted: 17 Apr 2012 | 10:51:32 UTC - in response to Message 24450.

my gtx680 didnt compute any work from GPUGrid
i tested it on S@H, it works without problem

Please be patient. The current GPUGrid application doesn't support the GTX 680. A new version is under construction; it will support the GTX 680.

GDF
Message 24457 - Posted: 17 Apr 2012 | 20:29:18 UTC - in response to Message 24451.

Compared to the current production application running on a gtx580, the new app is 17% faster on the same GTX580 and 50% faster on a Gtx680.

It will come out first as a beta and stay as a separate application for now. We will try to get it out quickly as it makes a big difference.

It should come out within this week for Linux and Windows.

gdf

Stoneageman
Message 24458 - Posted: 17 Apr 2012 | 20:55:33 UTC

Want NOW

5pot
Message 24459 - Posted: 17 Apr 2012 | 21:19:13 UTC
Last modified: 17 Apr 2012 | 21:23:10 UTC

50%!!!!!!!!!!!!!! WOW!!!!!! Great work guys!!!! Waiting "patiently"..... :)

Profile has been changed to accept betas only for that rig. Again, 50%!!!!!!!!

Sorry Einstein, but your apps have NOTHING on this jump in performance!! And that doesn't even account for performance/watt. My EVGA step up position in queue better increase faster!!! My 570 is still at #501!!!

skgiven
Message 24460 - Posted: 17 Apr 2012 | 22:06:11 UTC - in response to Message 24459.
Last modified: 17 Apr 2012 | 22:15:40 UTC

Compared to the current production application running on a gtx580, the new app is 17% faster on the same GTX580 and 50% faster on a Gtx680.

I don't think that means the GTX680 is 50% faster than a GTX580!
I think it means the new app will be 17% faster on a GTX580, and a GTX680 on the new app will be 50% faster than a GTX580 on the present app.
That would make the GTX680 ~28% faster than a GTX580 on the new app.
In terms of performance per Watt that would push it to ~160% compared to the GTX580, or twice the performance per Watt of a GTX480 ;)
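
Working that interpretation through numerically, taking the two quoted speedups and a ~20% lower power draw as assumptions:

```python
# GTX 680 vs GTX 580 on the new app, derived from the two quoted speedups over the old app.
speedup_580_new_app = 1.17   # new app on a GTX 580 vs old app on a GTX 580
speedup_680_new_app = 1.50   # new app on a GTX 680 vs old app on a GTX 580
power_ratio = 0.80           # assumed: GTX 680 draws ~20% less than a GTX 580

card_vs_card = speedup_680_new_app / speedup_580_new_app
perf_per_watt = card_vs_card / power_ratio

print(f"GTX 680 vs GTX 580 (same new app): ~{card_vs_card - 1:.0%} faster")   # ~28%
print(f"performance per Watt: ~{perf_per_watt:.0%} of a GTX 580's")           # ~160%
```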

GDF
Message 24462 - Posted: 17 Apr 2012 | 22:26:42 UTC - in response to Message 24460.

Just to give some numbers for clarity:
production app on GTX 580: 98 ns/day
new app on GTX 580: 115 ns/day
new app on GTX 680: 150 ns/day

gdf
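
For convenience, the relative speedups implied by those three figures (simple ratios of the ns/day numbers above):

```python
# Relative speedups implied by the quoted ns/day figures.
old_580, new_580, new_680 = 98.0, 115.0, 150.0   # ns/day

print(f"new app on GTX 580 vs old app:        {new_580 / old_580 - 1:+.0%}")   # ~+17%
print(f"new app on GTX 680 vs old on GTX 580: {new_680 / old_580 - 1:+.0%}")   # ~+53%
print(f"GTX 680 vs GTX 580, both on new app:  {new_680 / new_580 - 1:+.0%}")   # ~+30%
```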

5pot
Message 24463 - Posted: 17 Apr 2012 | 22:47:38 UTC

Lol. I knew that... Either way, a lot faster than my 570 that's currently attached! And ~160% of the GTX 580's performance per Watt is amazing. Again, great work guys!!! Still need to find a motherboard for Ivy Bridge that supports PCIe 3.0 at 2 x16.

Butuz
Message 24464 - Posted: 17 Apr 2012 | 23:17:31 UTC

Wow, that is excellent news. I had been waiting for news on this before committing to my GFX card purchase. You guys rock. Your dedication and speed of reaction is outstanding!

Will the new app also give speed boosts on GTX 570 and 560 cards?

Cheers

Butuz
Message 24465 - Posted: 17 Apr 2012 | 23:24:45 UTC

Also, I have one more question for those in the know. If I run a GTX 680 on a PCIe 2 motherboard, will it take a performance hit on that 150 ns/day figure? Could this be tested if you have time, GDF? I know it's not a high priority, but it may help people like me who don't have a next-gen motherboard make an informed decision.

Cheers

5pot
Message 24468 - Posted: 18 Apr 2012 | 3:08:21 UTC

All I can say on the note about the performance hit is that I'm going to THINK there won't be one. PCIe 3.0 allows for 16 GB/s in each direction, and for what we do, this is A LOT of bandwidth. From the results that I've seen, which are based on games, the performance increase seems to be only 5-7%; if that is the case, I would ASSUME there wouldn't be that big of a performance hit.

The only reasons I want a PCIe 3 mobo that can run two cards at x16 each are, one, because I play games (well, one game), and two, because it's just a mental thing for me (meaning running at full capacity) even if it's not noticed. I also don't plan on building another rig for some time, and I would like this one to be top notch ;).

It will MOST LIKELY only make a difference to those who run either a) huge monitors or b) multiple monitors using NVIDIA Surround, which I plan on doing with a 3+1 monitor setup.

Think of it like this: even the biggest tasks for GPUGrid only use a little over a GB of memory, if I'm not mistaken, so the need for 16 GB/s is way overpowered, I would imagine. I'll let you know how my 680 runs once the beta is out (it's on a PCIe 2.0 mobo currently).
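
For reference, the raw per-direction PCIe link bandwidth works out roughly as follows (assuming the standard line rates and encodings; real-world throughput is lower):

```python
# Theoretical per-direction bandwidth: lanes * line rate (GT/s) * encoding efficiency / 8 bits.
def pcie_gb_per_s(lanes, gt_per_s, encoding_efficiency):
    return lanes * gt_per_s * encoding_efficiency / 8.0

print(f"PCIe 2.0 x16: ~{pcie_gb_per_s(16, 5.0, 8 / 10):.1f} GB/s")      # 5 GT/s, 8b/10b
print(f"PCIe 2.0 x8:  ~{pcie_gb_per_s(8, 5.0, 8 / 10):.1f} GB/s")
print(f"PCIe 3.0 x16: ~{pcie_gb_per_s(16, 8.0, 128 / 130):.1f} GB/s")   # 8 GT/s, 128b/130b
```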

GDF
Message 24475 - Posted: 18 Apr 2012 | 7:51:21 UTC - in response to Message 24465.

I don't know whether PCIe 3 will make a small difference or not. We are trying it on a PCIe 3 motherboard.
The fact that the PCIe controller is now inside the CPU might make some difference through lower latency.

gdf

skgiven
Message 24478 - Posted: 18 Apr 2012 | 10:48:22 UTC - in response to Message 24475.

This post, earlier in this thread, speculatively discusses PCIe 3 vs PCIe 2.
Basically, for a single card it's probably not worth the investment, for two cards it depends on what you want from the system, and for 3 or 4 it's worth it.
As you are looking to get a new system it may be worth it. Obviously we won't know for sure until someone posts actual results for both PCIE2 and 3 setups and multiple cards.

5pot
Message 24482 - Posted: 19 Apr 2012 | 0:05:27 UTC
Last modified: 19 Apr 2012 | 0:43:28 UTC

In regards to PCIe 3.0 running 2x16: if I am reading this correctly, am I going to be "forced" to get an SB-E now? I would most likely get the 3930K, since the 3820 isn't unlocked, and that's what I would prefer. http://www.anandtech.com/show/4830/intels-ivy-bridge-architecture-exposed

Further, IB-E won't be released until MAYBE Q3-Q4 (probably towards Christmas would be my guess), and won't really offer any benefit besides a die shrink.

I guess this explains why I was having a hard time finding a PCIe 3.0 2x16 mobo. Wow, my idea of 100% GPU functionality just increased the price by about another $250. Hmmmm...

Oh, and I found this on Anandtech (though it's for an AMD GPU):

Simply enabling PCIe 3.0 on our EVGA X79 SLI motherboard (EVGA provided us with a BIOS that allowed us to toggle PCIe 3.0 mode on/off) resulted in a 9% increase in performance on the Radeon HD 7970. This tells us two things: 1) You can indeed get PCIe 3.0 working on SNB-E/X79, at least with a Radeon HD 7970, and 2) PCIe 3.0 will likely be useful for GPU compute applications, although not so much for gaming anytime soon.

Doesn't list what they ran or any specs though

EDIT: Well, it appears the 3820 can OC to 4.3, which would be plenty for what I need. Wouldn't mind having a 6-core, though; 4 extra threads for WUs would be nice but not mandatory. At $250 at Micro Center, quite a nice deal.

Carlesa25
Message 24485 - Posted: 19 Apr 2012 | 14:08:12 UTC
Last modified: 19 Apr 2012 | 14:12:38 UTC

Hi. An interesting comparison of NVIDIA vs AMD: the GTX 680/580 versus the HD 6970/7970.

It makes quite clear the poor FP64 performance of the GTX 680, and the better performance of the HD 6970/7970 at MilkyWay. Greetings.

http://muropaketti.com/artikkelit/naytonohjaimet/gpgpu-suorituskyky-amd-vs-nvidia

(In the linked benchmarks, lower is better.)


5pot
Message 24487 - Posted: 19 Apr 2012 | 15:11:13 UTC
Last modified: 19 Apr 2012 | 15:15:59 UTC

Glad we do FP32.

Further, the 680 has only 8 FP64 cores, which aren't included in its core count; they run at full speed, compared to the reduced speed of previous generations.
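
To put that in perspective, a rough FP64-rate comparison, assuming (as in NVIDIA's published GK104 description) 8 dedicated FP64 units per 192-core SMX, and the usual 1/8-rate cap on the GeForce GTX 580:

```python
# Approximate FP64:FP32 throughput ratios (GeForce parts, not the Tesla variants).
gk104_fp64_units_per_smx = 8
gk104_fp32_cores_per_smx = 192
gtx580_fp64_rate = 1 / 8          # GF110 GeForce cards are capped at 1/8 of FP32

gtx680_fp64_rate = gk104_fp64_units_per_smx / gk104_fp32_cores_per_smx   # 1/24
print(f"GTX 680 FP64 rate: ~1/{round(1 / gtx680_fp64_rate)} of FP32")     # 1/24
print(f"GTX 580 FP64 rate: ~1/{round(1 / gtx580_fp64_rate)} of FP32")     # 1/8
```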

wiyosaya
Message 24525 - Posted: 22 Apr 2012 | 2:30:57 UTC - in response to Message 24482.

EDIT. Well it appears the 3820 can OC to 4.3 which would be most for what I need. Wouldn't mind having a 6 core though. 4 extra threads for WUs would be nice but not mandatory. At $250 at MicroCenter, quite a nice deal.

I've been looking at the 3820 myself. In my opinion, that is the only SB-E to get. Techspot got the 3820 up to 4.625 GHz, and at that speed it performs pretty much as well as a 3960K at 4.4 GHz. To me, it's a no-brainer: a $1000 3960K, a $600 3930K, or a $250 3820 that performs as well as the $1K chip. According to the Micro Center web site, that price is in-store only.

Where SB-E will really excel is in applications that are memory intensive, such as FEA and solid modelling - a conclusion I came to as a result of the Techspot review, which tested the 3820 in a real-world SolidWorks usage scenario.

Anyway, IB is releasing on Monday, and it might be worth the wait. Personally, I do not think IB will beat SB-E in memory-intensive applications; however, I'll be looking very closely at the IB reviews.

5pot
Message 24526 - Posted: 22 Apr 2012 | 2:49:02 UTC

IB is not going to beat ANY SB-E. Its slight performance improvement and energy savings may very well be negated by the fact that it overclocks less well than Sandy (from what I've read, anyway).

The real advantage of Ivy will come from its PCIe 3 support, but SB-E already has that, plus the ability to natively support 40 lanes instead of the 16 of the 1155 CPUs, which is my MAIN reason.

We power users probably won't see much improvement, if any, until Haswell (the next "tock" phase) is released.

5pot
Message 24563 - Posted: 23 Apr 2012 | 19:16:06 UTC
Last modified: 23 Apr 2012 | 19:16:39 UTC

I know patience is a virtue, and I REALLY hate to ask, GDF, but... how's the progress on the beta app coming?

Truly itching to bring my 680 over to you guys. :)

As always, you guys do a great job, and I can't wait to hear how the experiment with the crystallographers works out!!

Keep it up!!

Evil Penguin
Message 24574 - Posted: 24 Apr 2012 | 7:03:42 UTC - in response to Message 24563.

Better than the ATi version, probably.

GDF
Message 24586 - Posted: 25 Apr 2012 | 17:51:11 UTC - in response to Message 24574.

Sorry guys, big changes over here in the lab and we are a bit busy, so I could not find the time to upload the new application.

One of the changes is my machine. First we were compiling on Fedora 10; now we will be compiling on Fedora 14. If you have an earlier release it could be a problem.

Also, I am having problems with the driver for the GTX 680 on Linux.

gdf

5pot
Message 24587 - Posted: 25 Apr 2012 | 19:39:50 UTC

Thanks for the update, always appreciated.

I've read the Linux drivers are pretty shoddy as well. The Windows ones aren't too bad, but are still not great, unfortunately.

Wish you the best of luck; I know you guys want to get it out.

One question when you get the time: if I'm correct, this app will be 4.2, but in another thread you (or one of you) mentioned CUDA 5. Any big changes that will affect this project down the road?

GDF
Message 24588 - Posted: 26 Apr 2012 | 7:31:45 UTC - in response to Message 24587.

This will be CUDA 4.2. If I mentioned CUDA 5, it was by mistake.
Later on we will probably also drop CUDA 3.1 in favor of CUDA 4, to make sure that people don't need the latest driver version.

gdf

wiyosaya
Message 24605 - Posted: 28 Apr 2012 | 3:13:16 UTC - in response to Message 24588.

This will be cuda4.2. If I mentioned cuda5 was by mistake.
Later on we will also probably drop cuda3.1 in favor of cuda4 to make sure that people don't need the latest driver version.

gdf

If cuda3.1 is dropped, will this affect those of us with older cards, such as an 8800 GT and a GTX 460?

Thanks.

skgiven
Message 24619 - Posted: 28 Apr 2012 | 16:55:01 UTC - in response to Message 24605.
Last modified: 28 Apr 2012 | 17:02:49 UTC

CUDA4.2 comes in the drivers which support cards as far back as the GeForce 6 series. Of course GeForce 6 and 7 are not capable of contributing to GPUGrid. So the question might be, will GeForce 8 series cards still be able to contribute?
I think these and other CC1.1 cards are overdue for retirement from this project, and I suspect that CUDA4.2 tasks that run on CC1.1 cards will perform worse than they do now, increasing the probability for retirement. While CC1.1 cards will perform less well, Fermi and Kepler cards will perform significantly better.

There isn't much on CUDA 4.2, but CUDA 4.1 requires 286.19 on Windows and 285.05.33 on Linux. I think support arrived with the non-recommended 295.x drivers; on one of my GTX 470s (295) BOINC says it supports CUDA 4.2, the other (Linux 280.13) says 4.0.

For CUDA 4.2 development, NVidia presently recommends the 301.32 Dev drivers for Windows and 295.41 drivers for Linux, and 4.2.9 toolkit - Fedora14_x64, for example.

I would expect the high end GTX200 series cards (CC1.3) will still be supported by GPUGrid, but I don't know what the performance would be and it's not my decision. I would also expect support for CC1.1 cards to be dropped, but we will have to wait and see.

5pot
Message 24636 - Posted: 29 Apr 2012 | 14:50:07 UTC

Someone's running betas.......

Butuz
Message 24637 - Posted: 29 Apr 2012 | 14:56:12 UTC

http://www.gpugrid.net/show_host_detail.php?hostid=108890

:-)

Butuz

5pot
Message 24638 - Posted: 29 Apr 2012 | 15:47:05 UTC
Last modified: 29 Apr 2012 | 15:51:35 UTC

Not entirely sure why you posted that individual's user ID; the app page still says nothing new is out for beta testing. Hoping this means they finally got their Linux drivers working properly and are finally testing in-house.

Maybe tomorrow?

EDIT: Tried to grab some on Windows, and still none available. Someone is definitely grabbing and returning results, though.

Retvari Zoltan
Message 24656 - Posted: 30 Apr 2012 | 17:44:08 UTC - in response to Message 24460.

I think it means the new app will be 17% faster on a GTX580, and a GTX680 on the new app will be 50% faster than a GTX580 on the present app.
That would make the GTX680 ~28% faster than a GTX580 on the new app.

new app on gtx 580 115 ns/day
new app on gtx 680 150 ns/day

Actually this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.
150 ns / 115 ns = 1.30434, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006 MHz / 772 MHz = 1.3031).
All in all, only 1/3 of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment.
It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second piece of bad news is that the shader utilization will not improve in the future.
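
Spelling that comparison out with the figures already quoted in the thread (ns/day from the new app, published clocks and core counts; Fermi shaders counted at their doubled hot clock):

```python
# Does the observed new-app speedup track the clock ratio or the raw shader-count ratio?
new_580, new_680 = 115.0, 150.0       # ns/day with the new app
core_580, core_680 = 772.0, 1006.0    # MHz core clocks
cores_580, cores_680 = 512, 1536
shader_580 = 2 * core_580             # Fermi shader ("hot") clock

observed = new_680 / new_580
clock_ratio = core_680 / core_580
peak_fp32_ratio = (cores_680 * core_680) / (cores_580 * shader_580)

print(f"observed speedup:  ~{observed:.3f}x")         # ~1.304x
print(f"core clock ratio:  ~{clock_ratio:.3f}x")      # ~1.303x
print(f"peak FP32 ratio:   ~{peak_fp32_ratio:.2f}x")  # ~1.95x
```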

5pot
Message 24659 - Posted: 30 Apr 2012 | 18:00:09 UTC
Last modified: 30 Apr 2012 | 18:35:59 UTC

I would like to know what limits performance as well, but the shader clock speed is actually lower. Remember, you have to double the other cards' core clock to get their shader clock, so the 680 = 1.1 GHz on boost, while the 580 at stock is 772 × 2 for the shader clock. It's also more efficient: running a 3820 @ 4.3 on WCG, and Einstein with the GPU at 80% utilization, this system currently uses only 300 W.

Butuz
Message 24663 - Posted: 1 May 2012 | 0:14:41 UTC - in response to Message 24656.

Actually this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 have come true.
150ns/115ns = 1.30434, so there is around 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031)
All in all, only 1/3 of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment.
It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second bad news is that the shader utilization will not improve in the future.


I think you are wrong. You are looking at it totally the wrong way, concentrating on the negatives rather than the positives.

1. The card is purposefully designed not to excel at compute applications. This is a design goal for NVIDIA: they designed it to play games, NOT crunch numbers. 95% of people buy these cards to play games. The fact that there is any improvement at all over the 5xx series cards in GPUGRID is a TOTAL BONUS for us - and, I think, a testament to the hard work of the GPUGRID developers and testers rather than anything else NVIDIA has done.

2. It looks like we are going to get a 30.4% performance increase at GPUGRID and at the same time a 47% drop in power usage (and thus a drop in heat and noise) on a card that is purposefully designed to be awful at scientific computing. And you are not happy with that?

I think you should count your lucky stars that we are seeing any improvements at all, let alone MASSIVE improvements in crunching per watt used.

My 2p anyway.

Butuz

Zydor
Message 24664 - Posted: 1 May 2012 | 0:36:49 UTC - in response to Message 24663.
Last modified: 1 May 2012 | 0:48:50 UTC

I think you should count your lucky stars we are seeing any improvements at all let alone MASSIVE improvements in crunch/per watt used.


For $1000 a card, I would expect to see a very significant increase, bordering on, if not actually, massive - no luck about it. The power reduction comes with the territory for 28nm, so that's out of the equation. What is left on the compute side is a 30% improvement, achieved by the 30% improvement in the GPU clocks.

From a compute angle, is it worth dropping £1000 on a card that - essentially - has only increased its clocks compared to the 580? I very much doubt it. In any case, NVIDIA's supply of 28nm is barely adequate at best, so a high-priced 690 goes along with that, and it's likely to stay that way for a good while until 28nm supply improves.

There is little doubt that they have produced a winner for gaming; it's a beast for sure, and is going to "win" this round. I doubt, though, that there will be many gamers, even the hard-core "I just want the fastest" players, who will drop the money for this. $1000 is a step too far, and I believe it will over time result in a real push-back on price - it's way too much when the mid-range cards will nail any game going, let alone in SLI.

Fingers crossed the project team can pull the rabbit out of the hat as far as GPUGRID is concerned - but it's not looking great at present, at least not for $1000 it isn't.

Regards
Zy

5pot
Message 24665 - Posted: 1 May 2012 | 0:53:24 UTC
Last modified: 1 May 2012 | 1:09:23 UTC

The only "issue" I have with the new series is that it will be on boost 100% of the time, with no way to change it. The card uses 1.175 V and runs at 1105 MHz in boost (specific to each card). With the amount of stress we put these things through, and given that Maxwell will not be out till 2014, I actually paid EVGA $25 to extend the 3-year warranty to 5. I plan on having these at LEAST till 2015, since I will have both cards be 600 series - bought one and stepped up a 570. Whenever Maxwell or the 7xx series comes out I'll buy more, but these will be in one system or another for quite some time. Even though temps at 80% utilization are 48-50, I'm not taking any chances with that high a voltage 24/7/365.

EDIT: Why does everyone keep saying the clock is faster? The core and shader clock are the same. Since we use the shader clock, it's actually slower: 1.1 GHz compared to what, 1.5 GHz on the 580? And the 680 is $500; the 690 is $1000.

EDIT AGAIN: If you already own, say, five 580s or whatever, AND live in a place with expensive electricity, considering used cards can still fetch roughly $250, you MAY actually be able to recover the cost in electricity alone, let alone the increased throughput. AGAIN, the SHADER CLOCK is ~39% SLOWER, not faster: 1.1 GHz shader on the 680 vs 1.544 GHz on the 580 (core × 2). The CORE clock is irrelevant to us. Am I missing something here?

skgiven
Message 24666 - Posted: 1 May 2012 | 10:18:00 UTC - in response to Message 24665.

With the Fermi series, the shaders were twice as fast as the GPU core. I guess people presume this is still the case with Kepler.

It's possible that some software will turn up that enables you to turn turbo off, though I expect many would want it to stay on.
Can the Voltage not be lowered using MSI Afterburner or similar?
1.175v seems way too high to me; my GTX470 @ 680MHz is sitting at 1.025v (73degC at 98% GPU load).

I think the scientific research methods would need to change in order to increase utilization of the shaders. I'm not sure that is feasible, or worthwhile.
While it would demonstrate adaptability by the group, it might not increase scientific accuracy, or it might require so much effort that it proves to be too much of a distraction. It might not even work, or could be counterproductive. Given that these cards are going to be the mainstream GPUs for the next couple of years, a methodology rethink might be worth investigating.

Not having a Kepler or tasks for one, I could only speculate on where the calculations are taking place. It might be the case that a bit more is now done on the GPU core and somewhat less on the shaders.

Anyway, it's up to the developers and researchers to get as much out of the card as they can. It's certainly in their interests.

Retvari Zoltan
Message 24667 - Posted: 1 May 2012 | 10:36:43 UTC - in response to Message 24665.

why does everyone keep saying the clock is.faster? The core and shader clock.is the same. Since we use shader clock its actually slower at 1.1 Ghz compares to what 1.5Ghz on.the 580.
....
AGAIN, the SHADER CLOCK is 39.% SLOWER, not faster. 1.1Ghz Shader on 680 vs 1.544Ghz on the 580 (core 2x). CORE clock is irrelevant to us. Am I missing something here?

As a consequence of the architectural changes (improvements, if you like), the new shaders in the Kepler chip can do the same amount of work at the core clock as the shaders in Fermi did at their doubled shader clock. That's why Kepler can be more power efficient than Fermi (and because of the 28nm lithography, of course).

5pot
Message 24668 - Posted: 1 May 2012 | 12:43:34 UTC

No, the voltage control does not affect this card whatsoever. Some say you can limit it by limiting the power usage target, but since we put a different kind of load on the chip, at 80% utilization mine is on boost with only a 60% power load. I've tried offsetting down to the base clock (-110) but the voltage was still at 1.175.

It bothers me a lot too. I mean, my temps are around 50, but as I said before, this is why I paid EVGA another $25 to extend the warranty to 5 years. If it does eventually bust, it wouldn't be my fault.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24669 - Posted: 1 May 2012 | 17:08:35 UTC - in response to Message 24668.

Perhaps in a month or so EVGA will update Precision to allow you to change the voltage, or release a separate tool that does.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rangers
Avatar
Send message
Joined: 5 Jan 12
Posts: 117
Credit: 77,256,014
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwat
Message 24670 - Posted: 1 May 2012 | 19:01:17 UTC - in response to Message 24669.

Just skimming this I'm getting a lot of mixed signals: I read that there's a 50% increase on the 680, and also that coding for the 680 almost isn't worth it. While I know it's just come out, should I be waiting for a 600 or not?

Profile Zydor
Send message
Joined: 8 Feb 09
Posts: 252
Credit: 1,309,451
RAC: 0
Level
Ala
Scientific publications
watwatwatwat
Message 24671 - Posted: 1 May 2012 | 19:25:45 UTC - in response to Message 24670.
Last modified: 1 May 2012 | 19:31:33 UTC

..... should I be waiting for a 600 or not?


That's the $64,000 question :)

It's built as a gamer's card, not a Compute card, and that's the big change from previous NVidia generations, where comparable gaming and Compute performance increases were almost a given - not on this one, nor, it seems likely, on the 690. The card also has abysmal, bordering on appalling, double-precision capability, and whilst that's not required here, it does rule out some BOINC projects.

If it's gaming, it's almost a no-brainer provided you are prepared to suck up the high price; it's a gaming winner for sure.

If it's Compute usage, there hangs the question mark. It seems unlikely to perform well compared to older offerings given the asking price, and the architecture does not lend itself to Compute applications. The Project Team have been beavering away to see what they can come up with. The 580 was built on 40nm, the 680 on 28nm, yet early indications point to only a 50% increase over the 580 - which, like for like given the 40nm-to-28nm switch, reflects the design change and the concentration on gaming rather than Compute.

Don't take it all as doom and gloom, but approach 680/690 Compute with healthy caution until real-world testing comes out, so your expectations can be tested and the real-world results compared with what you want.

Not a straight answer, because it's new territory - an NVidia card built for gaming that appears to "ignore" Compute. Personally I am waiting to see the Project Team's results, because if these guys can't get it to deliver Compute at a decent level that's commensurate with the asking price and the change from 40nm to 28nm, no one can. I suggest you wait for the test and development results from the Project Team, then decide.

Regards
Zy

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24672 - Posted: 1 May 2012 | 20:30:30 UTC

Don't know if this is relevant for what we do, but someone just posted this on NVIDIA forums:

It seems that integer multiply-add (IMAD) on GTX 680 runs 6x slower than in single precision floating point (FFMA). Apparently, only 32 cores out of 192 on each SM can do it.

A power user from Berkeley wrote this. AGAIN, I DON'T KNOW IF IT'S CORRECT OR RELEVANT FOR WHAT WE DO, BUT CONSIDERING THE TOPIC IS COMPUTE CAPABILITIES, I FIGURED I WOULD POST IT.
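
If anyone wants to sanity-check that claim on their own card, here is a small stand-alone probe I sketched for the purpose - it is not GPUGRID code, and the kernel names, launch configuration and iteration count are arbitrary. It times a long chain of dependent integer multiply-adds against the same chain done as single-precision fused multiply-adds and prints the ratio; a dependent chain is a crude way to measure throughput, so treat the result as indicative only. Build with nvcc, e.g. nvcc -arch=sm_30 madprobe.cu -o madprobe.

#include <cstdio>
#include <cuda_runtime.h>

#define N_BLOCKS  1024
#define N_THREADS 256
#define ITERS     200000

// Chain of dependent integer multiply-adds.
__global__ void imad_kernel(int *out, int iters)
{
    int a = threadIdx.x + 1, b = (int)(threadIdx.x | 1), c = blockIdx.x + 3;
    for (int i = 0; i < iters; ++i)
        a = a * b + c;
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;   // keep the result live
}

// Chain of dependent single-precision multiply-adds (normally compiled to FFMA).
__global__ void ffma_kernel(float *out, int iters)
{
    float a = threadIdx.x + 1.0f, b = 1.000001f, c = 0.5f;
    for (int i = 0; i < iters; ++i)
        a = a * b + c;
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main()
{
    int *d_i; float *d_f;
    cudaMalloc(&d_i, N_BLOCKS * N_THREADS * sizeof(int));
    cudaMalloc(&d_f, N_BLOCKS * N_THREADS * sizeof(float));

    // Warm up so the timed launches don't pay one-off initialisation costs.
    imad_kernel<<<N_BLOCKS, N_THREADS>>>(d_i, 1000);
    ffma_kernel<<<N_BLOCKS, N_THREADS>>>(d_f, 1000);
    cudaDeviceSynchronize();

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    float ms_imad = 0.0f, ms_ffma = 0.0f;

    cudaEventRecord(t0);
    imad_kernel<<<N_BLOCKS, N_THREADS>>>(d_i, ITERS);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms_imad, t0, t1);

    cudaEventRecord(t0);
    ffma_kernel<<<N_BLOCKS, N_THREADS>>>(d_f, ITERS);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms_ffma, t0, t1);

    printf("integer MAD: %.1f ms   float FMA: %.1f ms   ratio: %.2fx\n",
           ms_imad, ms_ffma, ms_imad / ms_ffma);

    cudaEventDestroy(t0); cudaEventDestroy(t1);
    cudaFree(d_i); cudaFree(d_f);
    return 0;
}

On a card where the claim holds, the integer version should take several times longer than the float version.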

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24673 - Posted: 1 May 2012 | 21:43:43 UTC - in response to Message 24672.
Last modified: 1 May 2012 | 21:44:27 UTC

I wish the latest Intel processors were only 50% faster!

If it's faster here then it's a good card for here.
For each GPU project, different cards perform differently. AMD chose to keep their excellent level of FP64 in their top (enthusiast) cards (HD 7970 and 7950), but dropped FP64 to really poor levels in their mid-range cards (HD 7870, 7850, 7770 and 7750; all 1/16th).

It's not actually a new thing from NVidia; the CC2.1 cards reduced their FP64 compared to the CC2.0 cards (trimmed the fat), making for relatively good and affordable gaming cards, and they were popular.
I consider the GTX680 more of an update of the CC2.1 cards than of the CC2.0 cards. We know there will be a full-fat card along at some stage. It made sense to concentrate on the gaming cards - that's where the money is. Also, NVidia have some catching up to do in order to compete with AMD's big FP64 cards.
NVidia's strategy is working well.

By the way, the GTX690 offers excellent performance per Watt compared to the GTX680, which offers great performance to begin with. The GTX690 should be ~18% more efficient.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Rangers
Avatar
Send message
Joined: 5 Jan 12
Posts: 117
Credit: 77,256,014
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwat
Message 24675 - Posted: 1 May 2012 | 23:05:54 UTC - in response to Message 24673.

Well, a 50% increase in compute speed sounds good to me, especially since NVidia had (not sure if they still do) a 620 driver link on their site, as someone here noted. But if it comes down to it, I guess a new 570 probably won't be a bad deal.

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 155,858
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24677 - Posted: 2 May 2012 | 1:14:48 UTC - in response to Message 24673.
Last modified: 2 May 2012 | 1:17:03 UTC

I wish the latest Intel processors were only 50% faster!

If it's faster here then it's a good card for here.
For each GPU project, different cards perform differently. AMD chose to keep their excellent level of FP64 in their top (enthusiast) cards (HD 7970 and 7950), but dropped FP64 to really poor levels in their mid-range cards (HD 7870, 7850, 7770 and 7750; all 1/16th).

It's not actually a new thing from NVidia; the CC2.1 cards reduced their FP64 compared to the CC2.0 cards (trimmed the fat), making for relatively good and affordable gaming cards, and they were popular.
I consider the GTX680 more of an update of the CC2.1 cards than of the CC2.0 cards. We know there will be a full-fat card along at some stage. It made sense to concentrate on the gaming cards - that's where the money is. Also, NVidia have some catching up to do in order to compete with AMD's big FP64 cards.
NVidia's strategy is working well.

By the way, the GTX690 offers excellent performance per Watt compared to the GTX680, which offers great performance to begin with. The GTX690 should be ~18% more efficient.


Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards.

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24678 - Posted: 2 May 2012 | 3:36:51 UTC

I posted a question on NVIDIA's forums regarding GPU Boost, the high voltage given to the card (1.175 V) and my concerns about it running 24/7, asking (pleading) that we be allowed to turn Boost off.

An Admins response:

Hi 5pot,

I can understand about being concerned for the wellbeing of your hardware, but in this case it is unwarranted. :) Previous GPUs used fixed clocks and voltages and these were fully guaranteed and warrantied. GPU Boost has the same guarantee and warranty, to the terms of your GPU manufacturer's warranty. :thumbup: The graphics clock speed and voltage set by GPU Boost is determined by real-time monitoring of the GPU core and it won't create a situation that is harmful for your GPU.

Amorphous@NVIDIA

Figured I would share this information with everyone else here.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24686 - Posted: 3 May 2012 | 13:20:30 UTC - in response to Message 24677.

Hi Robert,
At present there is nothing below a GTX680, but there will be.
GeForce GT 630 and GT 640 cards will come from NVidia in the next few months.
Although I don't know how they will perform, I expect these GK107 cards will work here. They will be 50/75W cards, but when running tasks they should only use ~75% of that (38/56W).

It's probably best to avoid the GF114- and GF116-based GF600 cards for now (40nm); these are just re-branded GF500 cards (Fermi rather than Kepler designs).

We should also see a GTX670, GTX660 Ti, GTX660 and probably a GTX650 Ti (or similar) within a few months. I think the GTX670 is expected around the 10th of May.

My guess is that a GTX670 would have a TDP of ~170W/175W and therefore actually use ~130W. There is likely to be at least one card with a TDP of no more than 150W (only one 6-pin PCIE power connector required). Such a card would actually use ~112W when running tasks.

I think these might actually compare favorably with their CC2.1 GF500 predecessors, but we will have to wait and see.

Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards.


____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

frankhagen
Send message
Joined: 18 Sep 08
Posts: 65
Credit: 3,037,414
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 24687 - Posted: 3 May 2012 | 14:40:15 UTC - in response to Message 24686.
Last modified: 3 May 2012 | 14:41:43 UTC

Hi Robert,
At present there is nothing below a GTX680, but there will be.
GeForce GT 630 and GT 640 cards will come from NVidia in the next few months.


You will definitely have to take a close look at what you get there:


http://www.geforce.com/hardware/desktop-gpus/geforce-gt-640-oem/specifications

6 different versions under the same label!

Mixed up, mangled up, fraudulent - at least potentially. :(

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 155,858
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24688 - Posted: 3 May 2012 | 16:34:35 UTC - in response to Message 24687.

Hi Robert,
At present there is nothing below a GTX680, but there will be.
GeForce GT 630 and GT 640 cards will come from NVidia in the next few months.


You will definitely have to take a close look at what you get there:


http://www.geforce.com/hardware/desktop-gpus/geforce-gt-640-oem/specifications

6 different versions under the same label!

Mixed up, mangled up, fraudulent - at least potentially. :(


I see only three versions there, but definitely mixed up.

However, a GT 645 is also listed now, and it's short enough that I might find some brand that will fit in my computer that now has a GTS 450. I may have to look at that one some more while waiting for the GPUGRID software to be updated enough to tell whether the results make it worth upgrading.

frankhagen
Send message
Joined: 18 Sep 08
Posts: 65
Credit: 3,037,414
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 24689 - Posted: 3 May 2012 | 16:39:50 UTC - in response to Message 24688.
Last modified: 3 May 2012 | 16:40:32 UTC

I see only three versions there, but definitely mixed up.


Take a closer look!

It's two Keplers and one Fermi.

It's 1 or 2 GB - or 1.5 or 3 GB - of RAM.

Plus DDR3 vs. GDDR5.

And those are only the suggested specs - OEMs are free to do whatever they want with clock rates..

However, a GT 645 is also listed now, and it's short enough that I might find some brand that will fit in my computer that now has a GTS 450.


If you want a rebranded GTX 560 SE..

Profile robertmiles
Send message
Joined: 16 Apr 09
Posts: 503
Credit: 727,920,933
RAC: 155,858
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24690 - Posted: 3 May 2012 | 17:09:49 UTC - in response to Message 24689.

I see only three versions there, but definitely mixed up.


Take a closer look!

It's two Keplers and one Fermi.

It's 1 or 2 GB - or 1.5 or 3 GB - of RAM.

Plus DDR3 vs. GDDR5.

And those are only the suggested specs - OEMs are free to do whatever they want with clock rates..

I see what you mean about RAM sizes.

However, a GT 645 is also listed now, and it's short enough that I might find some brand that will fit in my computer that now has a GTS 450.


If you want a rebranded GTX 560 SE..

I see nothing about it that says Fermi or Kepler. But if that's correct, I'll probably wait longer before replacing the GTS 450, and instead check whether one of the Kepler GT 640 versions is a good replacement for the GT 440 in my other desktop.

frankhagen
Send message
Joined: 18 Sep 08
Posts: 65
Credit: 3,037,414
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 24691 - Posted: 3 May 2012 | 17:16:37 UTC - in response to Message 24690.
Last modified: 3 May 2012 | 17:18:46 UTC

I see nothing about it that says Fermi or Kepler. But if that's correct, I'll probably wait longer before replacing the GTS 450, and instead check whether one of the Kepler GT 640 versions is a good replacement for the GT 440 in my other desktop.


look there:
http://en.wikipedia.org/wiki/GeForce_600_Series

Probably best for you to wait for a GT(X) 650 to show up..

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24692 - Posted: 3 May 2012 | 19:19:24 UTC - in response to Message 24687.
Last modified: 3 May 2012 | 19:23:00 UTC

These have already been released as OEM cards. Doesn't mean you can get them yet, and I would still expect retail versions to turn up, but exactly when I don’t know.
Anything that is PCIE2 probably has a 40nm Fermi design. Anything PCIE3 should be Kepler.

GeForce GT 600 OEM list:
GT 645 (GF114, Not Kepler, 40nm, 288 shaders) – should work as an entry level/mid-range card for GPUGrid
GT 630 (GK107, Kepler, 28nm, 384 shaders) – should work as an entry level card for GPUGrid
GT 620 (GF119, Not Kepler, 40nm, 48 shaders) – too slow for GPUGrid
605 (GF119, Not Kepler, 40nm, 48 shaders) – too slow for GPUGrid

GT 640 – 3 variants (6 if you count the memory options):
GK107 (Kepler), 28nm, PCIE3, 384shaders, 950MHz, 1GB or 2GB, GDDR5, 729GFlops, 75W TDP
GK107 (Kepler), 28nm, PCIE3, 384shaders, 797MHz, 1GB or 2GB, DDR3, 612GFlops, 50W TDP
GF116 (Fermi), 40nm, PCIE2, 144shaders, 720MHz, 1.5GB or 3GB, DDR3, 414GFlops, 75W TDP

Although these are untested, the 729GFlops card looks like the best OEM option.
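
If you do end up with one of these cards and aren't sure which chip you actually received, a few lines of host code against the CUDA runtime will tell you. This is a generic sketch of mine (not project code): Kepler parts report compute capability 3.0, while the rebranded Fermi parts report 2.x.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, dev);
        // Compute capability 3.x = Kepler, 2.x = Fermi; clockRate is reported in kHz.
        printf("GPU %d: %s, CC %d.%d, %d multiprocessors, %d MHz, %.0f MB\n",
               dev, p.name, p.major, p.minor, p.multiProcessorCount,
               p.clockRate / 1000, p.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}

Build it with nvcc (e.g. nvcc -o gpuinfo gpuinfo.cu) and run it; GPU-Z or the driver control panel will tell you much the same thing if you would rather not compile anything.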
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

frankhagen
Send message
Joined: 18 Sep 08
Posts: 65
Credit: 3,037,414
RAC: 0
Level
Ala
Scientific publications
watwatwatwatwat
Message 24693 - Posted: 3 May 2012 | 19:37:43 UTC - in response to Message 24692.

These have already been released as OEM cards. Doesn't mean you can get them yet, and I would still expect retail versions to turn up, but exactly when I don’t know.
Anything that is PCIE2 probably has a 40nm Fermi design. Anything PCIE3 should be Kepler.


Probably that's the best clue we'll get.

Only one thing left on the bright side: the low-TDP Kepler version of the GT 640 will most likely show up fanless, too.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 6,169
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24694 - Posted: 3 May 2012 | 21:33:49 UTC - in response to Message 24670.

Just skimming this I'm getting a lot of mixed signals: I read that there's a 50% increase on the 680, and also that coding for the 680 almost isn't worth it. While I know it's just come out, should I be waiting for a 600 or not?

This 50% increase is actually around 30%.
The answer depends on what you prefer.
The GTX 680, and even more so the GTX 690, is an expensive card, and they will stay expensive at least until Christmas. However, considering the running costs, it could be worth the investment in the long term.
My personal opinion is that nVidia won't release the BigKepler as a GeForce card, so there is no point in waiting for a better cruncher card from nVidia this time. In a few months we'll see if I was right about this. Even if nVidia does release the BigKepler as a GeForce card, its price will sit between those of the GTX 680 and 690.
On the other hand, there will be a lot of cheap Fermi-based (CC2.0) cards, either second-hand or "brand new" from a stuck stockpile, so one could buy roughly 30% less computing power for half (or maybe less than half) the price.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24695 - Posted: 3 May 2012 | 23:26:53 UTC - in response to Message 24694.
Last modified: 3 May 2012 | 23:28:55 UTC

Until the GF600 app gets released there's not much point in buying any GF600 card.

Upgrading to a GF500 on the cheap seems reasonable (and I've seen a few at decent prices), but I expect that when the GTX 670 turns up (it launches next week, supposedly) we will see a lot of price drops.

The GTX690 didn't really change anything; firstly there are hardly any to be had, and secondly a $999 card is way beyond most people, so it doesn't affect the prices of other cards. In fact the only thing it really competes against is the GTX680.
I suppose a few people with HD6990's and GTX 590's might upgrade, but not many, and not while they can't get hold of one.

I have a feeling 'full-fat' Kepler might have a fat price tag too. I'm thinking the Quadro line-up will expand to include amateur video editors as well as the professionals. The old Quadros were too pricey and most people just used GeForce Fermi cards, but now that the GF600 has put all its eggs in the gaming basket, there is nothing for video editors. The design of the GTX690 hints at this. The Teslas might also change, possibly becoming more university-friendly.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Dagorath
Send message
Joined: 16 Mar 11
Posts: 509
Credit: 179,005,236
RAC: 0
Level
Ile
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24705 - Posted: 4 May 2012 | 8:04:17 UTC - in response to Message 24677.

Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards.


The solution is easy: don't vent the hot exhaust from your GPU into the room. Two ways to do that:

1) Get a fan you can mount in the window. If the window is square/rectangular, get a fan with a square/rectangular body rather than a round one. Mount the fan in the window, then put the computer on a stand high enough that the air blowing out of the video card goes directly into the fan intake. Plug the open space not occupied by the fan with whatever cheap material you can find in a building supply store: a painted piece of 1/4" plywood, kitchen counter covering (arborite) or whatever.

2) I got tired of all the fan noise, so I attached a shelf outside the window and put both machines out there. An awning over my window keeps the rain off, but you don't have to have an awning; there are other ways to keep the rain off. Sometimes the wind blows snow into the cases in the winter, but it just sits there until the spring thaw. Sometimes I need to pop a DVD in the tray, so I just open the window; I don't use DVDs much anymore, so it's not a problem. I screwed both cases to the shelf so they can't be stolen. It never gets much colder than -30 °C here and that doesn't seem to bother them. Now I'm finally back to peaceful computing, the way it was before computers needed cooling fans.

wdiz
Send message
Joined: 4 Nov 08
Posts: 20
Credit: 871,871,594
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24707 - Posted: 4 May 2012 | 8:29:47 UTC - in response to Message 24705.

Any news about GPUGRID support for the GTX 680 (under Linux)?

Thx

wiyosaya
Send message
Joined: 22 Nov 09
Posts: 114
Credit: 589,114,683
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24780 - Posted: 7 May 2012 | 14:13:35 UTC - in response to Message 24619.

CUDA4.2 comes in the drivers which support cards as far back as the GeForce 6 series. Of course GeForce 6 and 7 are not capable of contributing to GPUGrid. So the question might be, will GeForce 8 series cards still be able to contribute?

At this point, I run the short queue tasks on my 8800 GT. It simply cannot complete long queue tasks in a reasonable time. If tasks in the short queue start taking longer than 24 hours to complete, I will probably retire it from this project.

That said, if CUDA4.2 brings significant performance improvements to Fermi, I'll be looking forward to it.

As to the discussion of what card to buy, I found a new GTX 580 for $370 after rebate. Until I complete my new system, which should be in the next two weeks or so, I have been and will be running it in the machine where the 8800 GT was. It is about 2.5x faster than my GTX 460 on GPUGrid tasks.

As I see it, there are great deals on 580s out there considering that about a year ago, these were the "top end" cards in the $500+ range.

____________

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24782 - Posted: 7 May 2012 | 14:17:51 UTC
Last modified: 7 May 2012 | 14:40:47 UTC

670s are looking to perform AT LEAST at 580 levels, if not better, and with a GIANT decrease in power consumption. They come out Thursday.

EDIT: Any chance we could get an update on the new app? An ETA, or how things are moving along? I know you guys are having issues with the drivers, but an update would be appreciated.

Thanks, keep up the hard work.

wiyosaya
Send message
Joined: 22 Nov 09
Posts: 114
Credit: 589,114,683
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24785 - Posted: 7 May 2012 | 17:07:57 UTC
Last modified: 7 May 2012 | 17:10:06 UTC

How is DP performance on the 670s? Given the DP performance of the 680, I would expect DP performance on the 670 to be worse than the 680's.

I know power consumption on the 580 is not optimal compared to the 600 series in most "gamer" reviews that I have seen; however, I chose the 580 since I run a project that requires DP capability. For projects that require DP, I would not be surprised if the 580 is more efficient, power-consumption-wise, than any of the 600 series, as the 680's DP benchmarks are a fraction of the 580's. On the project I run, Milkyway, I am seeing a similar 2.5 - 3x performance gain with the GTX 580 over my GTX 460.

Unfortunately, anyone considering a GPU has many factors to consider and that only makes the task of choosing a GPU harder and more confusing.

For a GPU dedicated to GPUGrid, a 600 series card may be an optimal choice; however, for anyone running projects that require DP capability, the 600 series may be disappointing at best.
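
For a rough sense of scale, based on the published shader counts, clocks and FP64 ratios (GeForce GTX 580: 1/8 of single precision; GTX 680: 1/24):

GTX 580: ~1581 GFLOPS SP / 8 ≈ 198 GFLOPS peak FP64
GTX 680: ~3090 GFLOPS SP / 24 ≈ 129 GFLOPS peak FP64

So on paper the older card really is the stronger DP cruncher, before power consumption even enters into it.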
____________

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24786 - Posted: 7 May 2012 | 17:32:14 UTC

Agreed. Don't even consider a 6xx if you're looking for DP.

GPUGRID Role account
Send message
Joined: 15 Feb 07
Posts: 134
Credit: 1,349,535,983
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwat
Message 24793 - Posted: 8 May 2012 | 3:20:39 UTC - in response to Message 24786.



How is DP performance on the 670s? Given the DP performance of the 680, I would expect DP performance on the 670 to be worse than the 680's.



Wow, you guys are attentive. I missed the appearance of the 670. Must have blinked.
Looks like it'll be 80% the speed of a 680.


Any news about GpuGRID support for GTX 680 (under linux) ?


Coming soon. There were big problems with recent (295.4x) Linux drivers that rather nixed things for us for a while.

MJH

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24794 - Posted: 8 May 2012 | 7:00:54 UTC - in response to Message 24793.
Last modified: 8 May 2012 | 7:01:48 UTC

I might have suggested releasing a Windows app and worrying about the Linux app when new drivers turn up, if it weren't for the fact that NVidia are not supporting WinXP for their GeForce 600 cards.

What's the performance like on Win7?
Is there still a ~15% loss compared to Linux?

It looks like the GTX670 will be released on the 10th of May. Hopefully supply will be able to meet demand and the prices of both the GTX680 and GTX670 will be competitive. If it turns up at ~£320 the GTX670 is likely to attract more crunchers than the GTX680 (presently ~£410), but all this depends on performance. I expect it will perform close to a GTX580 but use less power (~65-70% of a GTX580's).

I'm in no rush to buy either, seeing as an app for either Linux or Windows hasn't been released and the performance is somewhat speculative.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 24815 - Posted: 8 May 2012 | 21:16:59 UTC - in response to Message 24794.

Ok guys,
we are ready and can start work on it tomorrow. Today we compiled the latest version of ACEMD and it passed all the tests. The drivers are still quite poor for Linux, but it's workable for a beta.

gdf

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24816 - Posted: 8 May 2012 | 22:03:37 UTC

Glad to hear it. Windows as well?

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 24839 - Posted: 9 May 2012 | 18:08:31 UTC - in response to Message 24816.

It's out for Linux now.

gdf

5pot
Send message
Joined: 8 Mar 12
Posts: 411
Credit: 2,083,882,218
RAC: 0
Level
Phe
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24840 - Posted: 9 May 2012 | 19:24:35 UTC

I'm glad you guys were able to get it out for Linux. I know it's been hard with the driver issues. Is there a timeframe for a Windows beta app yet? I've got another 680 on the way, and a 670 being purchased soon. I would love to be able to bring them over here.

Thanks

wdiz
Send message
Joined: 4 Nov 08
Posts: 20
Credit: 871,871,594
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24847 - Posted: 9 May 2012 | 21:11:34 UTC

Failed again..

GTX 680 - Drivers 295.49
Archlinux kernel 3.3.5-1-ARCH

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 6,169
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24853 - Posted: 9 May 2012 | 22:07:38 UTC - in response to Message 24847.
Last modified: 9 May 2012 | 22:08:30 UTC

Failed again..

GTX 680 - Drivers 295.49
Archlinux kernel 3.3.5-1-ARCH

The failed workunits are 'ordinary' long tasks, which use the old application, so it's no wonder they're failing on your GTX 680.
You should set up your profile to accept only beta work for a separate 'location', and assign your host with the GTX 680 to this 'location'.

wdiz
Send message
Joined: 4 Nov 08
Posts: 20
Credit: 871,871,594
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24865 - Posted: 10 May 2012 | 10:00:25 UTC - in response to Message 24853.

OK, thank you for the information.
So I did that, but no beta jobs seem to be available :(

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 24882 - Posted: 10 May 2012 | 16:42:10 UTC - in response to Message 24865.

Please use the "New beta application for kepler is out" thread for beta testing.
Thanks,
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Message boards : News : Tests on GTX680 will start early next week [testing has started]
