Message boards : Multicore CPUs : CPU Comparisons - general open discussion

skgiven (Volunteer moderator, Volunteer tester)
Message 31191 - Posted: 3 Jul 2013 | 17:55:31 UTC

Post any comments, views or bright ideas about CPU crunching here :)

Retvari Zoltan
Message 31195 - Posted: 3 Jul 2013 | 19:43:46 UTC - in response to Message 31191.

I have that brand new Core i7-4770K configuration (4GB 1333MHz Dual Rank, Dual Channel, CL9 RAM; 320GB 7200rpm HDD; no OC yet) I've mentioned in the parent thread before. My experience with it is very impressive. The installation of Windows 7 x64 was faster than usual, and I do a lot of such installations on very different configurations. My last two were i5-3470s (8GB 1600MHz Dual Rank, Dual Channel, CL9 RAM; 1TB 7200rpm HDD; no OC).
Right now I'm experimenting with Rosetta@home on it. The aim of my experiments is to figure out how many simultaneous tasks are optimal, and whether the type of RAM used changes the RAC or not. I also want to find out whether Windows XP x64 can be installed properly on it, as there is no official support for Windows XP x86 or x64.
I had some random system restarts, but a slight increase in the RAM voltage (by 50mV) fixed this issue. I'm planning to change the RAM to 1600MHz Single Rank, Dual Channel, CL11.
You can check the results of this experimental host here and here. I've never earned over 1000 credits in R@h for a 24h workunit.
I am an Intel "fan", but I've no doubt that AMD CPUs can outperform Intel CPUs in some applications (and vice versa, of course). But if you take energy costs into consideration, the recent Intel 22nm CPUs are better (the question is how long it takes for the lower energy costs to recoup the price difference between Intel and AMD CPUs).
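A side note on the "how many simultaneous tasks" question: besides the project web preferences, the BOINC client can cap this locally with an app_config.xml in the project's directory. A minimal sketch (the app name minirosetta and the limit of 6 are illustrative assumptions, not a recommendation):

    <app_config>
      <app>
        <name>minirosetta</name>
        <!-- run at most 6 Rosetta tasks at once (example value) -->
        <max_concurrent>6</max_concurrent>
      </app>
    </app_config>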

TJ
Message 31196 - Posted: 3 Jul 2013 | 19:45:48 UTC

Well, let me start, as it is mainly my "fault" that this thread exists.

Edit: Zoltan beat me to it, so I am second :)

An i5 has a different architecture than an i7 with HT. The Xeons have yet another architecture, and so do AMD processors. That must be taken into account when comparing, but it can be hard to do in practice.

"You cannot compare apples with pears" is what we say in Dutch when things are being compared that are not the same (and can't be compared). Both are fruit and both are healthy, but that's it.
It's the same as comparing women and men: they are both human, but their architecture is significantly different...

Retvari Zoltan
Message 31197 - Posted: 3 Jul 2013 | 19:48:54 UTC - in response to Message 31196.

It's the same as comparing women and men: they are both human, but their architecture is significantly different...

Should we start a new thread to discuss that? :D

TJ
Message 31198 - Posted: 3 Jul 2013 | 19:53:16 UTC - in response to Message 31195.

This is off-topic, but how do you get such long WUs? Mine complete in about 10,000 seconds for 50-100 credits.
My Rosettas are here: http://boinc.bakerlab.org/rosetta/results.php?userid=308421

What is the important difference between single rank and dual rank memory?

I have 12GB in both i7's but have never seen more than 5.8GB used.

Retvari Zoltan
Message 31199 - Posted: 3 Jul 2013 | 20:17:31 UTC - in response to Message 31198.

This is off-topic, but how do you get such long WUs? Mine complete in about 10,000 seconds for 50-100 credits.

You can set the "target CPU runtime" in your Rosetta@home preferences.

What is the important difference between single rank and dual rank memory?

That's what I'm going to find out :)
Single rank RAM usually comes in the form of a single-sided memory module. It has fewer chips, therefore it consumes less energy, and it's faster.
A dual rank module is like two single rank memories on the same module (but they can't act as dual channel, since they share the data bus, and there has to be some delay when switching between ranks).
Some older motherboards had limited memory rank support, so if you put dual rank modules in all of their slots, the motherboard didn't recognize all of the memory. There is no such limitation on modern motherboards (as far as I know).
See this video about single vs dual rank, and this video about single vs dual channel.

I have 12GB in both i7's but have never seen more than 5.8GB used.

That's okay, but what you don't see in Task Manager is the saturation of the memory bus. I think R@h is a very memory intensive application, and 6 of them (plus a GPUGrid task) could saturate the memory bus to a level where the PC becomes unresponsive (and unusable).
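To make the bus-saturation effect visible, here is a minimal sketch (my own illustration, assuming Python with numpy; not Rosetta itself): run several memory-bound workers and watch the aggregate bandwidth flatten as workers are added.

    # Memory-bus saturation demo (illustrative only, not R@h).
    # Each worker streams a buffer far larger than the CPU caches;
    # aggregate MiB/s stops scaling once the memory bus is saturated.
    import multiprocessing as mp
    import time
    import numpy as np

    def worker(seconds, out):
        a = np.ones(8 * 1024 * 1024)   # 8M float64 = 64 MiB, well past L3
        b = np.empty_like(a)
        copies = 0
        end = time.time() + seconds
        while time.time() < end:
            np.copyto(b, a)            # moves ~128 MiB across the memory bus
            copies += 1
        out.put(copies * 128)          # MiB moved by this worker

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            q = mp.Queue()
            procs = [mp.Process(target=worker, args=(5.0, q)) for _ in range(n)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            total = sum(q.get() for _ in range(n))
            print(f"{n} workers: ~{total / 5.0:.0f} MiB/s aggregate")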

TJ
Message 31200 - Posted: 3 Jul 2013 | 20:56:57 UTC - in response to Message 31199.

That's okay, but what you don't see in Task Manager is the saturation of the memory bus. I think R@h is a very memory intensive application, and 6 of them (plus a GPUGrid task) could saturate the memory bus to a level where the PC becomes unresponsive (and unusable).

Ah, I understand, but my unresponsive system was only the i7 with the GTX660 in it. Now, with the GTX285 in it, it is very responsive again. But you are right: if we forget HT, all four cores are indeed in use.

Another system, a quad with Vista x86 (so not even 4GB of memory to use), runs GPUGrid and 2 Einstein CPU tasks 24/7 (thus no Rosetta), and is very responsive. I use it for mail, typing letters, even watching live TV via a stream and making postage stickers; in fact I use it the most.

You can set the "target CPU runtime" in your Rosetta@home preferences.

Indeed, I have asked about that in the forum there, but I never got a very clear explanation. The idea is that it sets how many decoy cycles are run. I will experiment with it.

Vagelis Giannadakis
Message 31207 - Posted: 4 Jul 2013 | 10:21:04 UTC

Posting here, mainly for full disclosure about myself.

I am not an Intel fanboy, period. I've owned many non-Intel CPUs, starting with an AMD 386SX, moving to an AMD 386DX-40, then to a UMC 486, then at some point to an AMD Athlon XP and directly after that to an AMD Athlon 64 X2.

Sadly for hot-blooded AMD fanboys, after Intel introduced the Nehalem architecture, AMD lost the performance crown. As simple as that, really! For a few years AMD had set the performance bar and Intel danced to AMD's tune, but unfortunately for AMD, things changed, and AMD had to return to the price game.

The price game is a good thing for us all, of course! With processing power surpassing almost all our everyday needs, what we really want is cheap, adequately powerful CPUs. Without AMD, Intel would (as it has done for years in the past) charge gold for baked sand.

So, we love AMD and we need it to keep Intel's prices under control. And AMD has achieved that pretty successfully over the years, forcing Intel to sell a myriad of low to mid level CPUs at low prices - of course still higher than AMD's prices, which is a good thing for AMD!

BUT, at least for the time being, the high performance war has been lost for AMD. I really do hope it manages to come up with an architecture that surpasses Intel and introduces new levels of performance. That would be good for all of us, and I would be among the first to buy the AMD chip!

Vagelis Giannadakis
Message 31208 - Posted: 4 Jul 2013 | 10:51:07 UTC - in response to Message 31196.

An i5 has a different architecture than an i7 with HT. The Xeons have yet another architecture, and so do AMD processors. That must be taken into account when comparing, but it can be hard to do in practice.

You're mostly wrong here. The single most expensive thing in logic circuitry is the fabs (the circuit fabrication factories). A fab cannot produce many different types of circuits, certainly not different architectures. Once an architecture change is done, it's done; there's no way back. So logic circuitry companies like Intel, AMD, Nvidia, etc. try very hard to do a single thing: squeeze the last drop of performance / efficiency / profit out of the architecture they are currently producing. Intel's i3s, i5s and i7s, and even lower-level and higher-level CPUs like some Pentium-branded parts and the Xeons, share the same architecture. It's the same with Nvidia's Kepler, and Fermi before that. Wikipedia has detailed information about these things.

"You cannot compare apples with pears" is what we say in Dutch when things are being compared that are not the same (and can't be compared). Both are fruit and both are healthy, but that's it.

AMD and Intel CPUs are different of course, but they seek to do the same thing: run our programs! So comparisons are not only valid, they are necessary! Comparisons are what drive performance up and prices down!

Beyond
Message 31213 - Posted: 4 Jul 2013 | 12:39:33 UTC - in response to Message 31207.

Sadly for hot-blooded AMD fanboys, after Intel introduced the Nehalem architecture, AMD lost the performance crown. As simple as that, really! For a few years AMD had set the performance bar and Intel danced to AMD's tune, but unfortunately for AMD, things changed, and AMD had to return to the price game.

The price game is a good thing for us all, of course! With processing power surpassing almost all our everyday needs, what we really want is cheap, adequately powerful CPUs. Without AMD, Intel would (as it has done for years in the past) charge gold for baked sand.

So, we love AMD and we need it to keep Intel's prices under control. And AMD has achieved that pretty successfully over the years, forcing Intel to sell a myriad of low to mid level CPUs at low prices - of course still higher than AMD's prices, which is a good thing for AMD!

BUT, at least for the time being, the high performance war has been lost for AMD. I really do hope it manages to come up with an architecture that surpasses Intel and introduces new levels of performance. That would be good for all of us, and I would be among the first to buy the AMD chip!

Initial insults aside, I pretty much agree with your post. I am an AMD fan for the reasons I stated before: AMD has been the only thing that has kept Intel honest, in both performance and pricing. Intel has played dirty pool against competitors throughout its history with its FUD, anti-competitive practices, rigged benchmarks, payouts to PC makers not to use AMD, etc, etc, etc.

Even in the years when AMD processors were far superior to the flawed Intel P4, most of the hardware sites and benchmark companies fell all over themselves to devise "reasons" why the P4 was faster. Intel ruled with FUD, payoffs and lies. It was pretty sickening, and it's one of the reasons I only buy Intel when I have to. Yes, I'm old and have been around the microcomputer industry since it began. I watched it all happen in real time. Flashhawk posted links to just a couple of examples of Intel's behavior. This kind of thing hurts us all. Here are the links again:

http://www.agner.org/optimize/blog/read.php?i=49&v=t

http://semiaccurate.com/2011/06/20/nvidia-amd-and-via-quit-bapco-over-sysmark-2012/

TJ
Message 31215 - Posted: 4 Jul 2013 | 17:20:02 UTC - in response to Message 31213.

You're not old, you are experienced! In the early eighties I was nagging for a Commodore 64, but it was too expensive, so I got a ZX Spectrum from my dad. At first I was the only one at school with a computer; even the math teachers were jealous. Later I got an American friend (his dad was in the Army and they lived in the Netherlands for a few years), and through him I got a real IBM very cheaply. Rock solid, but slow, as I later found out. There was a shop with brand-less PCs, and one had a mathematical coprocessor and did Fourier calculations at least 1000 times faster than the five times more expensive IBM. Those mathematical coprocessors are now integrated, for the young readers :)

Beyond
Message 31223 - Posted: 4 Jul 2013 | 20:05:24 UTC - in response to Message 31215.

There was a shop with brand-less PCs, and one had a mathematical coprocessor and did Fourier calculations at least 1000 times faster than the five times more expensive IBM. Those mathematical coprocessors are now integrated, for the young readers :)

It was probably an 8086-based machine with an 8087 co-processor. I souped a few of those up with NEC V30 chips, and used the NEC V20 for 8088 sockets. Rocket fast for those days :-)

skgiven (Volunteer moderator, Volunteer tester)
Message 31249 - Posted: 5 Jul 2013 | 12:02:04 UTC - in response to Message 31213.

Thank you to everyone who started using this thread.

A few general considerations when comparing run times on multicore processors:

If you are looking at someone else's data, making comparisons is a guessing game. You don't know what their CPU clocks are set to, what their RAM is, what else they use the system for, or whether the system is set to downclock/sleep/LAIM...

For your own data it's still important to take note of the CPU clocks; if the CPU is at stock, the clocks may well differ depending on the number of tasks being run. If, for example, a CPU clocks at 3.4GHz when 2 cores/threads are in use, it might drop to 3.1GHz when 8 cores/threads are in use. That clock speed difference is almost 10%.

Even if you overclock and fix the clocks so they are static, you will get runtime variation due to CPU usage competition and constraints from other resources (RAM, and the drive for heavy I/O projects). This can be seen clearly in climate models, for example, where the return per additional core diminishes rapidly; by the time you reach 6 or 7 cores there is virtually no benefit in running a 7th or 8th model. For this reason, among others, it's a good idea to mix and match CPU projects rather than just running one. For WCG crunchers this is the norm and the recommendation, though recent client updates are now preventing this!
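Here is a toy model of that scaling (the turbo clock table and the 5% contention penalty per extra task are made-up illustration numbers, not measurements):

    # Toy model: aggregate throughput for n parallel CPU tasks.
    def turbo_clock_ghz(n_tasks):
        # Hypothetical stock CPU: 3.4 GHz at light load, 3.1 GHz with 8 tasks.
        return 3.4 if n_tasks <= 2 else 3.4 - 0.05 * (n_tasks - 2)

    def efficiency(n_tasks):
        # Assumed per-task slowdown from memory/cache contention.
        return 1.0 / (1.0 + 0.05 * (n_tasks - 1))

    for n in range(1, 9):
        total = n * turbo_clock_ghz(n) * efficiency(n)
        print(f"{n} tasks: relative throughput {total:.2f}")

    # Each extra task adds less than the one before; with a stronger penalty
    # (heavy I/O or RAM-bound projects) the curve flattens out entirely.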

To emphasize what Zoltan said in his original post, it's about getting the most for your money in the long run. The purchase cost may or may not be offset by the running costs; this is of course something that varies by region. A processor that is best for you may not be the best for someone else's circumstances.

Space is another factor to consider. If you have plenty of space, you can buy more systems, and if electricity is cheap, that is the better option. If however you have almost no space, you might only be able to buy one system, so you might want to go for a higher spec and hope that the outlay will be offset against running costs in the long run, at least partially.
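To put numbers on the purchase-vs-running-cost question (a sketch with placeholder figures, not a comparison of real parts):

    # Break-even time for a pricier but more efficient CPU (placeholder numbers).
    price_diff = 130.0      # extra purchase cost, in your currency
    power_saved_w = 40.0    # watts saved under 24/7 load
    kwh_price = 0.25        # local electricity price per kWh

    savings_per_day = power_saved_w / 1000.0 * 24.0 * kwh_price
    years = price_diff / savings_per_day / 365.0
    print(f"Saves {savings_per_day:.2f} per day; break-even after {years:.1f} years")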

Retvari Zoltan
Message 31250 - Posted: 5 Jul 2013 | 12:17:00 UTC - in response to Message 31195.

I had some random system restarts, but a slight increase in the RAM voltage (by 50mV) fixed this issue.

It turned out that it was not the RAM voltage causing those random restarts: CPU C3/C6 state support was enabled in the BIOS (I forgot to disable it after a BIOS update re-enabled it). I'm still crunching 7 threads of Rosetta@home on this PC. Its CPU is running at 3.7GHz no matter how many threads (2..7) are running at the same time. I'll leave it like this for the weekend (maybe I'll start an 8th thread tomorrow), and I'll change the RAM modules to single rank on Sunday evening.

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Message 31265 - Posted: 5 Jul 2013 | 20:18:16 UTC

Regarding the discussion of AMD vs. Intel at Yoyo-ecm in the last thread: while I think Intels are overall the better choice for crunching, depending on your electricity cost and how long you keep your systems, you have to give credit where it's due: the AMDs seem pretty good at ecm. Whatever that may be worth to you.

Taking a quick look at the current top 3 to 5 hosts, I see the following:

i7-3930K with HT: 17800s for 570 credits
FX8120: 17000s for 570 credits
X6 1035T: 18450s for 593 credits

So while we don't know much about the actual configs and especially clock speeds (except Beyond's X6), it's clear that AMD is very competitive here. And for this it doesn't matter how many people may be running which CPU in less-than-optimal configs: what we see among the top hosts is what the hardware can do, given the chance.
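The per-task credit rates from those numbers are indeed close (simple arithmetic on the figures quoted above, nothing more):

    # Credit rate per task for the three hosts quoted above.
    hosts = {"i7-3930K (HT)": (570, 17800),
             "FX8120": (570, 17000),
             "X6 1035T": (593, 18450)}
    for name, (credits, seconds) in hosts.items():
        print(f"{name}: {credits / seconds * 3600:.1f} credits/hour per task")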

@Zoltan: I'd be very interested in an investigation of whether Haswell scales better with HT than previous chips. The reason I suspect this: it's got more execution units! They're not terribly relevant to FP crunching, but in mixed workloads they should definitely show, and BOINC is the best real-world use case for mixed workloads.
I know this is not easy to measure, but so far I haven't seen any meaningful data besides the usual benchmarks. I also suspect Haswell will show quite some teeth under heavy server load... but still no benchies.

@Haswell in general: I think people are looking at this chip in the wrong way, or were simply expecting too much. I think Haswell is great for BOINC, even if you can't use AVX2 yet. Great in the sense of "a significantly better choice than Ivy, for the same price".

In regular tests Haswell has been a bit shy of 10% faster per clock than Ivy (side note: and people complain that it consumes a bit more power... that additional performance has to come from somewhere!). That's not factoring in AVX2 yet, and I'm sure under BOINC we could find projects where the difference is even larger, just as there will be some where it's smaller.

But let's just use the average for now, and assume we're running our Haswell at a nice and smooth 4.1 GHz at slightly above 1.00 V (my Ivy can do this). Now the interesting question is: how hard would I have to push an Ivy to get similar performance? I'd say we'd need about 10% higher clocks, since performance scales slightly sub-linearly with frequency even under BOINC loads.

Which means we'd have to drive Ivy up to 4.5 GHz to reach the same performance! My chip would need about 0.2 V more for this, requiring ~44% more power from the voltage alone. The added frequency would increase Ivy's power consumption by a further 10%, which would more than cancel Haswell's higher power draw per clock.
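That ~44% checks out with the usual dynamic-power approximation P ~ f * V^2 (a back-of-the-envelope check of the example numbers above):

    # Dynamic CPU power scales roughly with frequency * voltage^2.
    f_ivy, v_ivy = 4.5, 1.20   # Ivy pushed to match Haswell (example voltage)
    f_hsw, v_hsw = 4.1, 1.00   # Haswell baseline from the post

    voltage_factor = (v_ivy / v_hsw) ** 2            # ~1.44 -> +44% from voltage
    total_factor = (f_ivy / f_hsw) * voltage_factor  # ~1.58 including the clock
    print(f"+{(voltage_factor - 1) * 100:.0f}% from voltage alone, "
          f"+{(total_factor - 1) * 100:.0f}% in total")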

So while Haswell cannot clock as high as Sandy and Ivy (due to the mobile-oriented 22 nm process being optimized for low voltages rather than high frequencies), consumes more power at a given frequency, and thus runs hotter, it makes up for this by giving us either higher performance in proportion to the added power consumption, or the same performance at significantly lower power consumption (by running at lower clocks and voltages than Ivy).

This improvement is surely too small to make you switch from an Ivy, but even Intel wouldn't expect you to do so anyway. However, if you're buying anyway then Haswell is the clear winner for BOINC.

MrS

skgiven (Volunteer moderator, Volunteer tester)
Message 31270 - Posted: 5 Jul 2013 | 22:44:45 UTC - in response to Message 31265.

This improvement is surely too small to make you switch from an Ivy, but even Intel wouldn't expect you to do so anyway. However, if you're buying anyway then Haswell is the clear winner for BOINC.

MrS

The Haswell prices have already dropped in the UK, and in some cases they are cheaper than SB/IB alternatives, which makes them the better choice for a new computer, at least compared like for like against SB and IB systems. In some situations, however, the older 6-core/12-thread processors such as the i7-3970X will still be the better choice (PCIe lanes).

One catch is that Haswell needs a new motherboard, so it's not an upgrade path for anyone who opted for an i3/i5 SB, for example.
As well as the enthusiast 1366 and 2011 sockets we've recently had LGA1156 and LGA1155 (SB/IB) motherboards, so adding an LGA1150 socket makes you wonder how future-proof an Intel system can be.

Another catch is still the purchase price:
FX8350 (8 cores) £150, FX8320 (8 cores) £120, FX6300 (6 cores) £90, FX4130 (4 cores) £74;
i7-4770K (8 threads) £270, i5-4670K (4 cores/4 threads) £190, i5-4430 £146.
For crunching, the i7-4770K is likely to be better for most projects than the FX8300s, but it still costs £120 to £150 more, and that could get you a mid-range GPU.
The mid-range Haswells are still expensive, and while the i5-4430 might be a very competitive cruncher, there will be some projects where the 8-core AMDs win hands down.
There really isn't a lower-end Haswell - there are no i3-4xxx processors yet, so there is nothing in the £75 to £120 range, at least for now.

TJ
Message 31273 - Posted: 5 Jul 2013 | 23:28:26 UTC

Two things, as I see it. A motherboard for Intel is almost twice as costly as an AMD motherboard with the same specs, with a lot of room for GPUs and USB 3.0.

Indeed AMD uses more watts than Intel, but not in my case: I have two Bloomfields using 130W maximum, where the AMD would use 125W. Okay, not a win.

The plan was a new rig in the autumn, mainly for crunching but also for general use, with Office, mail, TV via browser etc. That would be a six-core Intel with good quality parts. All the Pentiums and Celerons would go out of the door to make room, as they are not used anymore, except sometimes for testing some crap software or experimenting with Linux (Ubuntu).

But now I have 2 GTX660s and they don't run smoothly in the systems they are in (though they're not bad in the T7400). So for around 400 euro I could get all the parts I need; the GPUs are already here.

But it is possible that I can't think straight anymore at this hour, after two packets of chips for dinner :)

If someone can spec a nice Intel system for 400 euro, I would like to know. Keep in mind that the case must be large (roomy); that seems to be the most expensive part.

Beyond
Message 31280 - Posted: 6 Jul 2013 | 12:21:48 UTC - in response to Message 31273.

If someone can spec a nice Intel system for 400 euro, I would like to know. Keep in mind that the case must be large (roomy); that seems to be the most expensive part.

There are some good inexpensive cases with excellent airflow. I've used the Antec 300 for years: nice quality, with a 3-speed 140mm top fan and a 3-speed 120mm rear fan. Add a 120mm side fan and one or two 120mm front fans and you're set. There are also some minor variations on the basic 300 case now. Other low-cost options are the NZXT Source 120 and the Corsair Carbide 200R, both of which will hold enough fans to make a wind tunnel jealous. I have used dozens of the Antec 300 and am in the process of testing the other two. For all three the quality is decent. A super-expensive case won't get you much more in practical terms. I definitely prefer cases with the PSU on the bottom.

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Message 31282 - Posted: 6 Jul 2013 | 12:36:24 UTC

The Bloomfield is a rather old high-end CPU by today's standards. It was a significant step forward back then, but even current AMDs beat its energy efficiency (and I mean in general, not just in special AMD-friendly projects).

@SK: sure, the initial purchase price favors AMD. How much this matters actually depends on your local electricity price. I know I'd have to opt for energy efficiency in almost every choice I make around my rig, but in the US electricity is a lot cheaper.

Regarding the mainboards: I don't see much of a difference there. Sure, there are very cheap boards for AMD, but these are... cheap. The last ones I bought for work didn't last very long under sustained load. I'd say about 100€ gets you a fine board from either camp, a bit less if you don't OC.

And finally, getting back to the AMD vs Intel topic: comparing the Richland A10-6700 and A10-6800K, something interesting pops up: the 6800K is typically 5% faster in applications but needs about 60% more power to get there. AMD is totally blowing their power budget here by going "full throttle". That's obviously not what we should run for 24/7 crunching.

Using the A10-6700 as a basis and scaling up to 4 modules would probably give us around 115 W TDP (same uncore, doubled modules) - which is not far off what we're getting with the slightly higher clocked FX8350. So... what to do? Scale further! Reduce the voltage, maybe reduce the frequency to ~3.5 GHz as well, and energy efficiency won't be as bad any more. Sub-100 W should be possible without extreme tuning. We'd lose a bit more performance compared to Intel, but if we choose a project combination where the modules work well (i.e. at least some integer code in there), throughput and performance/watt should be decent.
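The 115 W figure can be reproduced by splitting the A10-6700's 65 W TDP into uncore and module shares (the split itself is an assumed illustration, not an AMD figure):

    # Rough TDP extrapolation from 2 to 4 modules (uncore/module split assumed).
    tdp_2_modules = 65.0   # W, A10-6700
    uncore = 15.0          # W, assumed share for memory controller, NB, etc.
    per_module = (tdp_2_modules - uncore) / 2.0
    print(f"4-module estimate: {uncore + 4 * per_module:.0f} W")  # -> 115 W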

Ideally one would schedule things like this: one core of a module crunches some pure integer task (ABC, Collatz, probably ecm, maybe mind modelling) and the other core is assigned a regular FP task. This would use the module architecture in the most efficient way. The only problem is that the integer projects are... not the most attractive ;)
However, polling a GPU should also be an integer-dominated task! But such BOINC scheduling would require some serious programming effort.
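A rough sketch of what such pairing could look like on Linux (a hypothetical helper, not anything BOINC offers; it assumes logical CPUs 2k and 2k+1 are the two cores of module k, as on Bulldozer-family chips):

    # Hypothetical scheduler sketch: pin one integer and one FP task per module.
    import os
    import subprocess

    def launch_pair(module, int_cmd, fp_cmd):
        int_core, fp_core = 2 * module, 2 * module + 1
        procs = []
        for cmd, core in ((int_cmd, int_core), (fp_cmd, fp_core)):
            p = subprocess.Popen(cmd)
            os.sched_setaffinity(p.pid, {core})  # pin to one core (Linux only)
            procs.append(p)
        return procs

    # Example: module 0 runs an integer app on core 0 and an FP app on core 1.
    # launch_pair(0, ["./integer_app"], ["./fp_app"])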

MrS

TJ
Message 31285 - Posted: 6 Jul 2013 | 13:19:32 UTC

I have browsed a bit, searching only in the Netherlands (where a lot of parts are more expensive than in the US). An i7-4770K (four cores), 84W, €317. An i7-3930K (six cores), 130W, €530. There are nice Xeons with 40W and 80W TDPs, but with four-digit prices, so they are no option for me.

Skgiven has the nice idea of getting a cheap i3 or i5 and making a rig for GPU crunching only. But I know myself: I will end up doing more with the system, and then it won't work. Not that I am dependent on one system; I have taken a lot of old stuff home from work... Won't do that anymore, unless there is a T7400 :)

TJ
Message 31286 - Posted: 6 Jul 2013 | 13:26:50 UTC - in response to Message 31280.

If someone can spec a nice Intel system for 400 euro, I would like to know. Keep in mind that the case must be large (roomy); that seems to be the most expensive part.

There are some good inexpensive cases with excellent airflow. I've used the Antec 300 for years: nice quality, with a 3-speed 140mm top fan and a 3-speed 120mm rear fan. Add a 120mm side fan and one or two 120mm front fans and you're set. There are also some minor variations on the basic 300 case now. Other low-cost options are the NZXT Source 120 and the Corsair Carbide 200R, both of which will hold enough fans to make a wind tunnel jealous. I have used dozens of the Antec 300 and am in the process of testing the other two. For all three the quality is decent. A super-expensive case won't get you much more in practical terms. I definitely prefer cases with the PSU on the bottom.

Yeah, your info would be great, as you have built a lot. I have never built a system from scratch; I have done a lot of changes, additions, the upgrading bit.
I have looked at a lot of cases, and I want the PSU in the bottom, one big or two smaller fans in the front to suck air in, one in the back to blow air out, and one or two top fans to blow air out.
Now I see plenty of cases with fans in the front, but the front is closed, like the Corsair Obsidian 550D for example. That seems a roomy case, and the price in Germany is acceptable. But why is the front closed while it has fans there?
I work in physics but cannot explain this yet. There must be side openings for sure, but will they suck in enough air? It's not funneled like a jet engine.
I would be glad for an explanation.

Retvari Zoltan
Message 31287 - Posted: 6 Jul 2013 | 14:12:20 UTC - in response to Message 31285.

I have browsed a bit, searching only in the Netherlands (where a lot of parts are more expensive than in the US). An i7-4770K (four cores), 84W, €317. An i7-3930K (six cores), 130W, €530.

I don't recommend the i7-3930K; even though it has six cores, it's outdated. If you want to build a high-end cruncher for CPU tasks, you should wait a little longer for the release of the 4th generation extreme CPU series: there are going to be models with 8 cores and HT.
If you want to build a cruncher PC now, I recommend an i7-4770 or i7-4770K. If this cruncher is to have 2 GPUs, I recommend a motherboard with 4 PCIe x16 slots (such as my Gigabyte GA-Z87X-OC). You can put 2 GPUs in a motherboard like that 4 slots apart, so there will be 2 slots of space for airflow when using standard dual-slot GPUs, and if you choose some oversized 3-slot-wide GPUs, there will still be a slot of space between the cards.
Besides being an Intel fan, I prefer ASUS and Gigabyte parts, especially motherboards. I searched for days for a new motherboard, and I found this Gigabyte GA-Z87X-OC to be the ideal choice for a dual-GPU cruncher (24/7/365) for the following reasons:
1. price (it's around 180€)
2. PCIe slot arrangement
3. efficient onboard CPU power supply with long-lasting capacitors
4. thermal conductivity of the motherboard (it has double-thickness copper in the PCB - you can feel it by its weight)
5. onboard switches and a BIOS diagnostic display
This motherboard has one drawback: the PCIe slots share the CPU's 16 PCIe lanes, so in a dual-GPU setup both GPUs will run at PCIe x8, which doesn't impair the performance of the current GPUGrid client. However, you can have a motherboard without that limitation for double the price (e.g. the GA-Z87X-OC Force).

ExtraTerrestrial Apes (Volunteer moderator, Volunteer tester)
Message 31289 - Posted: 6 Jul 2013 | 15:00:47 UTC - in response to Message 31287.

I'd consider 180€ far too much for a mainboard, especially if it shall "only" run 2 GPUs. In this case I'd choose one with 2 standard PCIe slots, i.e. x8 PCIe 3.0 each, for plenty of bandwidth. And considering TJ's current thermal problems, I wouldn't recommend he try 4 GPUs ;)

And the Ivy-E chips for socket 2011 are rumored to top out at 6 cores + HT again for "enthusiasts". Xeons will go higher (again), but at insane prices for BOINC. Still, I also wouldn't recommend the 3930K, as it's a 32 nm Sandy Bridge and hence less power efficient. Sure, it's got 50% more cores than Haswell, but clock for clock Haswell should be around 15% faster, which combined with its higher clocks reduces the performance advantage to <30%.

MrS

TJ
Message 31290 - Posted: 6 Jul 2013 | 15:09:49 UTC

Four GPUs is not too bad if you take two GTX690s :-)

Sorry for the Intel fans, but I will build an AMD-based one first, with the 2 GTX660s. I will keep it cheap, as I am not that enthusiastic about their performance, but they are here now and must do GG.
Then, with Zoltan's tip in mind, I will wait for the 4th generation extreme ones to see how they do and what the prices are. That could become a "super cruncher" with two GTX770s, or one 690, and would be something for the winter: first for the money reason, and second because summer is starting and my attic becomes too warm. In fact it already is.

Beyond
Message 31292 - Posted: 6 Jul 2013 | 18:06:29 UTC - in response to Message 31286.

There are some good inexpensive cases with excellent airflow. I've used the Antec 300 for years: nice quality, with a 3-speed 140mm top fan and a 3-speed 120mm rear fan. Add a 120mm side fan and one or two 120mm front fans and you're set. There are also some minor variations on the basic 300 case now. Other low-cost options are the NZXT Source 120 and the Corsair Carbide 200R, both of which will hold enough fans to make a wind tunnel jealous. I have used dozens of the Antec 300 and am in the process of testing the other two. For all three the quality is decent. A super-expensive case won't get you much more in practical terms. I definitely prefer cases with the PSU on the bottom.

Now I see plenty of cases with fans in the front, but the front is closed, like the Corsair Obsidian 550D for example. That seems a roomy case, and the price in Germany is acceptable. But why is the front closed while it has fans there?
I work in physics but cannot explain this yet. There must be side openings for sure, but will they suck in enough air? It's not funneled like a jet engine. I would be glad for an explanation.

I agree. The Corsair and NZXT above collect air from the edges of the front panel. That has to be restrictive compared to the Antec 300, which pulls air directly in through the front panel.

As for the AMD vs Intel question: the 990FX MB you bought has 2 x16 slots, much preferable to a board with 2 x8 slots or an x16 and an x4 slot. I think you made a good choice, and I think you'll be happy with the FX-8350 too. Realistically we're not talking much CPU performance difference at all compared to the fastest Intels. People get all excited over 10-15%. Big deal. Those are game benchmarks, not the massively threaded scientific apps we run. Remember too that Intel "owns" the benchmark companies and a good share of the "performance" sites, not to mention their dirty compiler tricks. How many CPU BOINC credits is that? Not much. If you use that very significant cost savings to buy faster GPUs, you will be far ahead with the FX-8350 system you bought in total production. Sure, you can buy cheap Intel CPUs and MBs to get the cost closer, but then the AMD system is faster for CPU too.

I ran more Intel than AMD in the P2 and P3 days, but comparing the two, my AMDs always ended up faring better and costing less. As I mentioned, I've been building PCs for many years. One thing I've noticed is that I keep having to fix the Intel-based systems while the AMDs just keep running and running. I've still got quite a few in businesses chugging along, doing their more-than-decade-old accounting and other business chores, without me ever having to do more than blow the dust out every few years. Works for me. Again, as I mentioned before, what's possibly more cost effective than buying one of the excellent X6 1045T CPUs for $80 and dropping it into the latest 990FX AM3+ motherboard of choice?

Retvari Zoltan
Message 31293 - Posted: 6 Jul 2013 | 18:08:52 UTC - in response to Message 31289.

I'd consider 180€ far too much for a mainboard, especially if it shall "only" run 2 GPUs. In this case I'd choose one with 2 standard PCIe slots, i.e. x8 PCIe 3.0 each, for plenty of bandwidth. And considering TJ's current thermal problems, I wouldn't recommend he try 4 GPUs ;)

The mainboard can support 4 GPUs; besides, I wouldn't put a 300€ Core i7-4770K in a cheap mainboard. This mainboard did cost 50€ more than I usually spend on one, but I thought it was worth the extra.

Retvari Zoltan
Message 31295 - Posted: 6 Jul 2013 | 19:36:19 UTC - in response to Message 31292.

As for the AMD vs Intel question: the 990FX MB you bought has 2 x16 slots, much preferable to a board with 2 x8 slots or an x16 and an x4 slot. I think you made a good choice, and I think you'll be happy with the FX-8350 too. Realistically we're not talking much CPU performance difference at all compared to the fastest Intels. People get all excited over 10-15%. Big deal. Those are game benchmarks, not the massively threaded scientific apps we run. Remember too that Intel "owns" the benchmark companies and a good share of the "performance" sites, not to mention their dirty compiler tricks.

As far as I know, those dirty tricks are not present in the latest versions of that compiler. However, it occurred to me that the Yoyo client could be "optimized" for AMD CPUs, yet we don't call that a "dirty trick". NVidia has also used "dirty tricks" for better benchmark results. There are many little things which made me not like AMD; for example, their driver support is worse than Intel's.

I ran more Intel than AMD in the P2 and P3 days, but comparing the two, my AMDs always ended up faring better and costing less. As I mentioned, I've been building PCs for many years. One thing I've noticed is that I keep having to fix the Intel-based systems while the AMDs just keep running and running. I've still got quite a few in businesses chugging along, doing their more-than-decade-old accounting and other business chores, without me ever having to do more than blow the dust out every few years. Works for me.

That's interesting, because I had such difficulties with AMD configurations in those days (leaking and bloated capacitors, way too loud cooling) that I gave up on AMD. One of our customers had a PIII 733MHz configuration with 64MB SDRAM running Win'98(!) until recently, when their ERP system stopped working on Win'98 after an update (mainly for security reasons). A local "wise man" tried to install WinXP on that PC, but he gave up because it was way too slow; the PC itself is still working, though. They have an Athlon XP based PC (from a different source) which is also still working, even though it runs hot. We sold many P4 2.8GHz PCs with Intel mainboards and they are still working today. I almost wish they would fail soon, because until they do, I have to explain why they should buy new hardware.

TJ
Message 31298 - Posted: 6 Jul 2013 | 20:45:39 UTC

The quad with the 550Ti problems is more than 5 years old. It's a Dell XPS, a small, shiny black edition with a rubber mat on top. The model was only available for a short time; I got it thanks to a Dell promotion with a 20% discount and some free stuff. Out of the box it ran 24/7 for 4 years without problems, then the GPU broke (stripes on the screen), and I bought the GTX550Ti because it suited the light PSU and could do GG. It worked fine until yesterday, when I swapped the 550Ti for the 660 to test.
Five years ago there was a lot of complaining about Vista - it was unstable, no drivers and so on. But I have never had problems with it.

Around 15 years back there was a "private PC project": one could pay from pre-tax salary and the employer also paid a part. I was the only one who bought an AMD; it was way cheaper, so within the budget limit I got the biggest HD, a CD burner, and an 18.1 inch flat screen. By choosing AMD I had the "biggest system" of all the participants. IT and colleagues said I would get trouble, but this system also served me for 10 years (no crunching).

So it is a bit down to one's own experience and likings. I don't mind having all sorts of PCs.

TJ
Message 31299 - Posted: 6 Jul 2013 | 20:51:02 UTC - in response to Message 31280.

If someone can spec a nice Intel system for 400 euro, I would like to know. Keep in mind that the case must be large (roomy); that seems to be the most expensive part.

There are some good inexpensive cases with excellent airflow. I've used the Antec 300 for years: nice quality, with a 3-speed 140mm top fan and a 3-speed 120mm rear fan. Add a 120mm side fan and one or two 120mm front fans and you're set. There are also some minor variations on the basic 300 case now. Other low-cost options are the NZXT Source 120 and the Corsair Carbide 200R, both of which will hold enough fans to make a wind tunnel jealous. I have used dozens of the Antec 300 and am in the process of testing the other two. For all three the quality is decent. A super-expensive case won't get you much more in practical terms. I definitely prefer cases with the PSU on the bottom.

Indeed nice systems, Beyond, but after careful examination of the pictures I saw one thing that I don't like. I want the PSU in the bottom, but I want holes in the bottom as well, because I want its fan facing downwards, not into the case, where (luke)warm air would flow towards the MOBO.
Air in from the side will work, I guess, but I'd like one big fan or two fans behind a grille (with a dust filter).

Beyond
Message 31300 - Posted: 6 Jul 2013 | 20:53:24 UTC - in response to Message 31295.

As for the AMD vs Intel question: the 990FX MB you bought has 2 x16 slots, much preferable to a board with 2 x8 slots or an x16 and an x4 slot. I think you made a good choice, and I think you'll be happy with the FX-8350 too. Realistically we're not talking much CPU performance difference at all compared to the fastest Intels. People get all excited over 10-15%. Big deal. Those are game benchmarks, not the massively threaded scientific apps we run. Remember too that Intel "owns" the benchmark companies and a good share of the "performance" sites, not to mention their dirty compiler tricks.

As far as I know, those dirty tricks are not present in the latest versions of that compiler. However, it occurred to me that the Yoyo client could be "optimized" for AMD CPUs, yet we don't call that a "dirty trick". NVidia has also used "dirty tricks" for better benchmark results.

Yoyo's ecm client is simply wrapped, and why would ecm optimize for AMD when there are far more Intels running the project?

There are many little things which made me not like AMD; for example, their driver support is worse than Intel's.

What drivers? CPU???

I ran more Intel than AMD in the P2 and P3 days, but comparing the two, my AMDs always ended up faring better and costing less. As I mentioned, I've been building PCs for many years. One thing I've noticed is that I keep having to fix the Intel-based systems while the AMDs just keep running and running. I've still got quite a few in businesses chugging along, doing their more-than-decade-old accounting and other business chores, without me ever having to do more than blow the dust out every few years. Works for me.

That's interesting, because I had such difficulties with AMD configurations in those days (leaking and bloated capacitors, way too loud cooling) that I gave up on AMD. One of our customers had a PIII 733MHz configuration with 64MB SDRAM running Win'98(!) until recently, when their ERP system stopped working on Win'98 after an update (mainly for security reasons). A local "wise man" tried to install WinXP on that PC, but he gave up because it was way too slow; the PC itself is still working, though. They have an Athlon XP based PC (from a different source) which is also still working, even though it runs hot.

Surely you know that the bad capacitors had nothing to do with CPUs and were equally common on all motherboards, a result of the motherboard companies buying crappy, defective Chinese capacitors. It was a dark period for motherboards; many failed. A friend, the largest cruncher on our team during that period, ran all Intel with ECS MBs (22 PCs). They all failed (Chinese capacitors); he was not happy. Besides my business experiences, I'll relate a personal one from quite a while ago. I had an equal number of regular AMD desktop systems and very expensive large-cache Intel servers spread around my house, one of each in 5 rooms to even out the heat distribution. Lightning storm: everything went dark. When the power returned, 3 of the expensive Intel servers were dead: 2 MBs and 1 CPU. Within a week the other 2 failed. All of the AMD systems survived and never showed a sign of the catastrophe. Not a large statistical sample, but telling. Pretty much the end of my trusting expensive Intel servers...

Beyond
Message 31302 - Posted: 6 Jul 2013 | 20:59:33 UTC - in response to Message 31299.

Indeed nice systems, Beyond, but after careful examination of the pictures I saw one thing that I don't like. I want the PSU in the bottom, but I want holes in the bottom as well, because I want its fan facing downwards, not into the case, where (luke)warm air would flow towards the MOBO.
Air in from the side will work, I guess, but I'd like one big fan or two fans behind a grille (with a dust filter).

I always install the PSU with the fan on the bottom. There's enough of an airspace. Even if you turned it upside down, the fan inside the PSU would be sucking in the air and expelling it out the back of the case. Holes in the bottom are OK, maybe more airflow, but floor level is also where the dirt/dust is. Take your pick; I've heard both sides argued. Big surprise...

TJ
Message 31305 - Posted: 6 Jul 2013 | 21:14:47 UTC - in response to Message 31302.

Indeed nice systems, Beyond, but after careful examination of the pictures I saw one thing that I don't like. I want the PSU in the bottom, but I want holes in the bottom as well, because I want its fan facing downwards, not into the case, where (luke)warm air would flow towards the MOBO.
Air in from the side will work, I guess, but I'd like one big fan or two fans behind a grille (with a dust filter).

I always install the PSU with the fan on the bottom. There's enough of an airspace. Even if you turned it upside down, the fan inside the PSU would be sucking in the air and expelling it out the back of the case. Holes in the bottom are OK, maybe more airflow, but floor level is also where the dirt/dust is. Take your pick; I've heard both sides argued. Big surprise...

I have all systems on a very smooth riser (self-made) where dust is easily blown away; it can't stick. I clean that regularly. That's why I like the dust filters. But I will see.

...servers spread around my house, one of each in 5 rooms to even out the heat distribution...

An expensive heater! How could you sleep? I would also like to put PCs on the floor where we sleep, to get my attic cooler, but the missus won't have it... too much noise.

Beyond
Message 31306 - Posted: 6 Jul 2013 | 21:31:14 UTC - in response to Message 31305.

...servers spread around my house, one of each in 5 rooms to even out the heat distribution...
An expensive heater! How could you sleep? I would also like to put PCs on the floor where we sleep, to get my attic cooler, but the missus won't have it... too much noise.

White noise. Sleeping is better; it hides much of the background sounds. Not so expensive: my house is very efficient and uses a type of off-peak power with a fixed rate. Summer is not as good, as air conditioning doesn't work as well. This is Minnesota, though, so heating is a MUCH bigger issue than cooling :-)

TJ
Message 31307 - Posted: 6 Jul 2013 | 21:43:58 UTC - in response to Message 31302.

I always install the PSU with the fan on the bottom. There's enough of an airspace. Even if you turned it upside down, the fan inside the PSU would be sucking in the air and expelling it out the back of the case. Holes in the bottom are OK, maybe more airflow, but floor level is also where the dirt/dust is. Take your pick; I've heard both sides argued. Big surprise...

After reading it again I understand. Of course the fan in the PSU sucks air in as well, so facing up it could help as extra cooling.

Vagelis Giannadakis
Message 31309 - Posted: 6 Jul 2013 | 22:15:09 UTC

I don't understand the PSU-at-the-bottom trend. PSUs suck air from inside the case and thus contribute to expelling warm air. So why would anyone put them at the bottom, where the cool air is?

There's only one reason for me: better cooling for the PSU itself. Maybe important for high-end 1kW PSUs, but for more modest ones I don't think it makes much sense.

Retvari Zoltan
Message 31310 - Posted: 6 Jul 2013 | 22:30:05 UTC - in response to Message 31300.

Yoyo's ecm client is simply wrapped, and why would ecm optimize for AMD when there are far more Intels running the project?

:) Good point. Then their application is simply better on AMD processors. The question is why they don't make a better (optimized) application for Intel processors, if those CPUs are the majority.

There are many little things which made me not like AMD; for example, their driver support is worse than Intel's.

What drivers? CPU???

No. :) I mean the drivers for the chipset around it. I recall some bad VIA drivers causing various problems, or the chipset being way slower than the competing Intel ones. There was a point when AMD decided to make chipsets for their own processors, because they weren't satisfied with the 3rd party chipsets. Intel did the same, for the same reason, but earlier. NVidia chipsets were quite good, as far as I can recall.

Surely you know that the bad capacitors had nothing to do with CPUs and were equally common on all motherboards, a result of the motherboard companies buying crappy, defective Chinese capacitors. It was a dark period for motherboards; many failed.

Yes, I know. The defect was caused by a stolen electrolyte formula, which turned out to be incomplete at the time it was stolen. The electrolyte was degraded by normal usage; the product of this degradation was a gas (hydrogen, if I remember correctly) that bloated the capacitors and reduced their capacitance, forcing the FETs to switch harder and produce more heat. This process was faster at higher temperatures, so the hotter-running AMDs failed earlier. The problem also had a "financial" component: those who bought AMD for the lower prices usually bought a cheaper mainboard for it, and those cheaper mainboards were made of cheaper components. But there were quite expensive mainboards at that time with bloated capacitors too. One of my spare boards is an Intel D975XBX, which had two of them (one at the northbridge and one at the DIMM sockets). After I changed those two for new ones, the board's stability problems were gone. We had one or two Intel entry-level server board failures (after 4 or 5 years of use) because the leaking electrolyte corroded the PCB beyond repair. Since then I desolder the good capacitors (and sometimes the FETs too) from dead MBs, to have spare parts for fixing others.

A friend, the largest cruncher on our team during that period, ran all Intel with ECS MBs (22 PCs). They all failed (Chinese capacitors); he was not happy.

The name ECS sounds good, but their MBs are... not of the quality the company name suggests.

Besides my business experiences, I'll relate a personal one from quite a while ago. I had an equal number of regular AMD desktop systems and very expensive large-cache Intel servers spread around my house, one of each in 5 rooms to even out the heat distribution. Lightning storm: everything went dark. When the power returned, 3 of the expensive Intel servers were dead: 2 MBs and 1 CPU. Within a week the other 2 failed. All of the AMD systems survived and never showed a sign of the catastrophe. Not a large statistical sample, but telling. Pretty much the end of my trusting expensive Intel servers...

I can understand that. The PSUs in those servers weren't up to their task... regarding spike suppression. Spike suppressors can withstand only a limited number of spikes, depending on the power of the spikes they receive, so it is highly recommended to use power strips with built-in spike (or surge) suppressors.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31311 - Posted: 6 Jul 2013 | 22:34:13 UTC - in response to Message 31309.
Last modified: 6 Jul 2013 | 22:34:38 UTC

I don't understand the PSU at the bottom trend. PSUs suck air from inside the case and thus contribute to expelling warm air. So, why would anyone put them at the bottom, where cool air is?

There's only one reason for me: better cooling for the PSU itself.

You are right about that.

Maybe important for high-end 1KW PSUs, but for more modest ones, I don't think it makes much sense.

It does for all PSUs, since every PSU's main electrical characteristics (efficiency, ripple, noise, voltage stability, longevity) are better at lower temperatures.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31312 - Posted: 6 Jul 2013 | 22:43:37 UTC - in response to Message 31311.

Yes, PSUs are an important part of a PC, but they are mostly forgotten or bought cheap. The old T7400 (7 years old) already has an 80+ Gold unit.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31313 - Posted: 6 Jul 2013 | 22:44:57 UTC - in response to Message 31311.
Last modified: 6 Jul 2013 | 23:07:36 UTC

I don't understand the PSU at the bottom trend. PSUs suck air from inside the case and thus contribute to expelling warm air. So, why would anyone put them at the bottom, where cool air is?

There's only one reason for me: better cooling for the PSU itself.

You are right about that.

Two other reasons: the heaviest component is then at the bottom of the case, and most importantly, the CPU in a case with a bottom-mounted PS ends up in the upper back corner with both the back and top fans expelling the hot air. Hot air rises ;-)
3rd reason: Also puts the GPUs in a better position for cooling from the front and side fans.
4th reason: PS efficiency generally goes down as temperature rises (not to mention failure rate).

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31316 - Posted: 6 Jul 2013 | 23:02:40 UTC - in response to Message 31310.

No. :) I mean the drivers for the chipset around it. I recall some bad VIA drivers causing various problems, or chipsets that were way slower than the competing Intels. There was a point when AMD decided to make chipsets for their own processors because they weren't satisfied with the 3rd-party chipsets. Intel did the same for the same reason, but earlier. NVidia chipsets were quite good, as far as I can recall.

A long time ago and exactly why AMD started making their own chipsets. No issues since that I know of. Intel had the same problems for the same reasons.

Surely you know that the bad capacitors had nothing to do with CPUs and were equally common on all motherboards, a result of the motherboard companies buying crappy, defective Chinese capacitors. It was a dark period for motherboards, many failed.

Yes, I know. The "defect" caused by a stolen formula of the electrolyte, which turned out to be incomplete at the time it was stolen. So the electrolyte was degraded by normal usage, the result of this degradation was a gas (hydrogen, if I remember it correctly), and that bloated them, reduced their capacity, forcing the FETs swiching more, causing more heat. This process, however, was faster at higher temperatures, so the hotter AMDs failed earlier.

I think the P4s were hotter than any AMD yet.

The problem had a "financial" component: those, who bought AMD for their lower prices, usually bought a cheaper mainboard for it, and these cheaper mainboards were made of cheaper components.

If true (it wasn't for me) it's not exactly fair to compare low quality boards to high quality boards. So are you saying that AMD isn't good because you bought poor quality motherboards for them and those boards failed at a higher rate?

Besides my business experiences, I'll relate a personal one from quite a while ago. I had an equal number of regular AMD desktop systems and very expensive large cache Intel servers spread around my house, one of each in 5 rooms to even out the heat distribution. Lightning storm: everything went dark. When the power returned 3 of the expensive Intel servers were dead, 2 MBs and 1 CPU. Within a week the other 2 failed. All of the AMD systems survived and never showed a sign of the catastrophe. Not a large statistical sample but telling. Pretty much the end of my trusting expensive Intel servers...

I can understand that. The PSUs in those servers weren't up to their tasks... regarding spike suppression. These spike suppressors could withstand only a limited number of spikes, depending on the power of the spikes they receive, so it is highly recommended to have power strips with built-in spike (or surge) suppressors.

They were all on APC 600 sine-wave UPSes (I bought them in a bulk purchase) and I used them both at businesses and here. A couple of the AMD systems were just on surge strips. They weren't damaged, which made me wonder about the importance of UPSes...

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31317 - Posted: 6 Jul 2013 | 23:03:07 UTC - in response to Message 31307.

I always install the PS with the fan on the bottom. There's enough of an airspace. Even if you turned it upside down, the fan inside the PS would be sucking in the air and expelling it out the back of the case. Holes in the bottom are OK, maybe more airflow, but the floor level is also where the dirt/dust is. Take your pick, I've heard both sides argued. Big surprise...

After reading it again I understand. Of course the fan in the PSU sucks air as well, so it could help as extra cooling when faced up.

If the PSU is at the bottom, there is no point in placing it facing up. The idea of the PSU at the bottom is to make the PSU run cooler by giving it fresh, cool air directly from the outside. It's too bad that the PSU works like a vacuum cleaner, but that's almost the same when it's placed above the MB. The dirt/dust is raised mainly by people walking around the PCs and reaches approximately 1m high, so there is little difference between the PSU at the bottom or at the top of the case, as far as dirt is concerned.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31318 - Posted: 6 Jul 2013 | 23:17:49 UTC - in response to Message 31317.
Last modified: 6 Jul 2013 | 23:19:40 UTC

I always install the PS with the fan on the bottom. There's enough of an airspace. Even if you turned it upside down, the fan inside the PS would be sucking in the air and expelling it out the back of the case. Holes in the bottom are OK, maybe more airflow, but the floor level is also where the dirt/dust is. Take your pick, I've heard both sides argued. Big surprise...

After reading it again I understand. Of course the fan in the PSU sucks air as well, so it could help as extra cooling when faced up.

If the PSU is at the bottom, there is no point in placing it facing up. The idea of the PSU at the bottom is to make the PSU run cooler by giving it fresh, cool air directly from the outside. It's too bad that the PSU works like a vacuum cleaner, but that's almost the same when it's placed above the MB. The dirt/dust is raised mainly by people walking around the PCs and reaches approximately 1m high, so there is little difference between the PSU at the bottom or at the top of the case, as far as dirt is concerned.

Then I will go for a case with holes in the bottom and a filter where the PSU fits.
I have seen many setups with the PSU at the top with the fan facing downwards, even from top brands.
Sleep tight y´all.
____________
Greetings from TJ

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31319 - Posted: 7 Jul 2013 | 0:23:27 UTC - in response to Message 31316.
Last modified: 7 Jul 2013 | 0:57:48 UTC

The problem had a "financial" component: those, who bought AMD for their lower prices, usually bought a cheaper mainboard for it, and these cheaper mainboards were made of cheaper components.

If true (it wasn't for me) it's not exactly fair to compare low quality boards to high quality boards.

Sure. But that's life. I'm kind of a solution provider, and I think of a PC as a complex system which should be bought, used and maintained for many years. Unlike the majority of customers, who don't understand that: they just want to lower their purchase price, not knowing that the cheap option turns out to be the more expensive one by the end of its lifetime.

So are you saying that AMD isn't good because you bought poor quality motherboards for them and those boards failed at a higher rate?

No, I'm not saying that, that's nonsense. We're talking about how our past experiences influence our present choices (in our case, why I am an Intel "fan"). I'm saying that any system built from cheap components to minimize its purchase price will fail earlier than an optimal one. In the PC world, AMD and VIA definitely targeted those who wanted such PCs. (I was working in a custom-made PC shop until 2002.)
As I've said earlier: a recent AMD CPU isn't as power efficient as a recent Intel CPU. That's the main difference between them; everything else can be deduced from this. There were times (P4) when it was quite the opposite. But today a recent Intel CPU is more advanced as a CPU, while an AMD "CPU" is more advanced as an APU (CPU+GPU). Their APU abilities are irrelevant here at GPUGrid, though.

They were all on APC 600 sine-wave UPSes (I bought them in a bulk purchase) and I used them both at businesses and here. A couple of the AMD systems were just on surge strips. They weren't damaged, which made me wonder about the importance of UPSes...

If they were APC Back-UPS 600s, they could make the situation worse, because they are switching (offline) type UPSes. They switch to batteries only when the power fails (you can hear the relays clicking inside when it happens). When a power failure is caused by lightning, this behavior can double the amplitude of the surge on the "protected" equipment. The electricity supply of sensitive equipment should be made uninterruptible by online UPSes (for example the APC Smart series), which always run from the batteries, while the batteries are continuously charged from the mains. That way the protected equipment is not connected to the source of the surges, and when a power outage happens the switch to batteries won't cause an additional surge, because no such switch takes place.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31325 - Posted: 7 Jul 2013 | 13:17:19 UTC - in response to Message 31319.

So are you saying that AMD isn't good because you bought poor quality motherboards for them and those boards failed at a higher rate?

No, I'm not saying that, that's nonsense. We're talking about how our past experiences influence our present choices (in our case, why I am an Intel "fan"). I'm saying that any system built from cheap components to minimize its purchase price will fail earlier than an optimal one.

I was pulling your leg a little bit ;-)

They were all on APC 600 sine-wave UPSes (I bought them in a bulk purchase) and I used them both at businesses and here. A couple of the AMD systems were just on surge strips. They weren't damaged, which made me wonder about the importance of UPSes...

If they were APC Back-UPS 600s, they could make the situation worse, because they are switching (offline) type UPSes. They switch to batteries only when the power fails (you can hear the relays clicking inside when it happens). When a power failure is caused by lightning, this behavior can double the amplitude of the surge on the "protected" equipment. The electricity supply of sensitive equipment should be made uninterruptible by online UPSes (for example the APC Smart series), which always run from the batteries, while the batteries are continuously charged from the mains. That way the protected equipment is not connected to the source of the surges, and when a power outage happens the switch to batteries won't cause an additional surge, because no such switch takes place.

By sine wave UPS I was indicating they were the APC Smart-UPS 600. I was also oversimplifying a bit (saving some typing). All were APC Smart-UPS models. One of the failed Intel servers was on a Smart-UPS 900 and one was a Smart-UPS 1400 with both an Intel server and an AMD desktop attached. The Intel server failed and the AMD ran until it was obsolete for my use and I gave it to someone who needed a PC.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31326 - Posted: 7 Jul 2013 | 13:45:57 UTC
Last modified: 7 Jul 2013 | 13:47:25 UTC

I think we're all smart enough not to blame failing motherboards on either Intel or AMD ;) And from personal experience CPUs last pretty much infinitely (until you don't want to use them any more), unless you run them at serious voltages, but mainboards and PSUs fail far more often, followed by HDDs and GPUs.

And from the chipset and driver side I don't think there are any reservations against the current choices from AMD. The SATA controller may be a bit slower, there's no SSD caching available like Intel SRT, and there's no native PCIe 3. But none of this is a show-stopper for crunching GPU-Grid. For POEM on anything but low-end cards you'd need the full 16x PCIe 3, though.

In conclusion: it really boils down to your project choices (what's the CPU performance difference there) and your electricity price (how much the higher power consumption of the AMD CPU costs you).

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31328 - Posted: 7 Jul 2013 | 14:27:25 UTC - in response to Message 31326.

I think we're all smart enough not to blame failing motherboards on either Intel or AMD ;) And from personal experience CPUs last pretty much infinitely (until you don't want to use them any more), unless you run them at serious voltages, but mainboards and PSUs fail far more often, followed by HDDs and GPUs.

CPUs are definitely one of the hardier components. I had a few PII and PIII CPUs fail (but MOSTLY the ones on daughterboards, and the culprit might have been the daughterboards they came on). Same result though, they were toast. The only AMDs I ever had fail were a couple of K-5 CPUs (their worst processor design ever) a long time ago. I would agree that current CPUs are among the most bulletproof components in a computer. If the CPU cost savings allow you to buy a better MB and other components though, the equation can change. Lately it's been HDs that have been most troublesome for me. I think that HD quality has been going down, down, down.

And from the chipset and driver side I don't think there are any reservations against the current choices for AMD. The SATA controller may be a bit slower, there's no SSD caching availalbe like Intel SRT and there's no native PCIe 3. But none of this is a game stopper for crunching GPU-Grid. For POEM on anything but low-end cards you'd need the full 16x PCIe 3, though.

I think there's an AM3+ ASUS board with PCIe 3 but not sure how available it is. Haven't done POEM for a while but it's another one I ran up to 500,000,000 credits. As I remember it did well on the HD 5850 and 5870 GPUs running 5x WUs on PCIe 2 x16 at a fairly high GPU usage. Einstein also runs pretty well in an x16 slot but takes a hit when run on a card in an x4 slot. Some difference also in GPUGrid but not so much.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31329 - Posted: 7 Jul 2013 | 14:38:21 UTC

Part of our discussion here seems to revolve around two different philosophies. One philosophy is that since AMD processors are less expensive, people go with components that are all cheaper. My philosophy is that if someone's budget is, say, $1000, then save money on the less expensive CPU and install a better MB, PS and GPU. As far as power saving goes, part of the cost savings can buy you a gold or platinum PS that reduces power draw for the whole system and can possibly net you a power reduction overall.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31330 - Posted: 7 Jul 2013 | 14:54:18 UTC - in response to Message 31328.

I think there's an AM3+ ASUS board with PCIe 3 but not sure how available it is.

Yes can be ordered in the Netherlands even, 215 euri.

https://www.asus.com/Motherboards/SABERTOOTH_990FXGEN3_R20/
____________
Greetings from TJ

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31331 - Posted: 7 Jul 2013 | 15:04:43 UTC - in response to Message 31329.

Part of our discussion here seems to revolve around two different philosophies. One philosophy is that since AMD processors are less expensive, people go with components that are all cheaper. My philosophy is that if someone's budget is, say, $1000, then save money on the less expensive CPU and install a better MB, PS and GPU. As far as power saving goes, part of the cost savings can buy you a gold or platinum PS that reduces power draw for the whole system and can possibly net you a power reduction overall.

Interesting. If I take €1000 (instead of $1000) and try to find rather good stuff, not the cheapest things and not the most expensive ones (no Samsung, LG, etc., but Asus, EVGA, Kingston, Seagate or WD), then €1000 is not enough.
Having the two GTX 660s, an SSD, HD, DVD-ROM, OS, CPU cooler and PSU already (roughly €800), I still need at least €800 for the rest of the parts.
In my opinion this must mean that off-the-shelf PCs for 400-1500 euros are junk for our needs.
____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31332 - Posted: 7 Jul 2013 | 16:54:43 UTC - in response to Message 31331.

Whatever your price target is, $1000 was simply an example. In the US I can build a pretty nice PC for $1000 though, especially when watching for sales and rebates.

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31333 - Posted: 7 Jul 2013 | 17:09:26 UTC - in response to Message 31331.

In my opinion this must mean that off-the-shelf PCs for 400-1500 euros are junk for our needs.

I think with 1500 euro one can build a pretty decent 24/7 cruncher.
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31335 - Posted: 7 Jul 2013 | 18:05:51 UTC - in response to Message 31332.

Whatever your price target is, $1000 was simply an example. In the US I can build a pretty nice PC for $1000 though, especially when watching for sales and rebates.

That is what I do. HD, SSD, DVD-ROM, Blu-ray, mice, all with 30-50% rebate at "today only" sales.
Of course I understand that you took it as an example. Yeah, prices in the US are great, especially when converting to euros. I don´t look at your prices anymore :-( A long time ago I ordered memory in the US; I knew that shipping was relatively expensive and that customs would intervene and charge me on top of it, but it was still way cheaper than in the Netherlands.
The two GTX 660s cost me €420, so €580 for the rest is not possible in the Netherlands.

Vagelis
I can indeed build one for less than €1500, but it has other parts than an off-the-shelf one; that is what I meant.


____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31342 - Posted: 7 Jul 2013 | 20:57:05 UTC - in response to Message 31329.

Part of our discussion here seems to revolve around two different philosophies. One philosophy is that since AMD processors are less expensive, people go with components that are all cheaper. My philosophy is that if someone's budget is, say, $1000, then save money on the less expensive CPU and install a better MB, PS and GPU. As far as power saving goes, part of the cost savings can buy you a gold or platinum PS that reduces power draw for the whole system and can possibly net you a power reduction overall.

My viewpoint is actually neither. I don't think working with hard budgets makes much sense; instead I strive to find an optimal solution in terms of total cost of ownership. I choose components approximately like this:

1. What do I need to fulfill the required role well, as cheap as possible?
2. Quick budget check. If failed skip step 3 and 4.
3. Could I add anything to substantially improve the value of the system? (performance, usability, upgradeability, running costs)
4. Is the price OK?
5. Compromises, if necessary.

This way I'm trying to get an optimal configuration first and foremost: do everything the machine needs to do, as cheap as possible as a starting point, but upgraded where it makes the most sense.

If I then end up at 450€ after compromises and someone says he can only spend 400€, I'd advise him to save a bit longer or not to spend that money on a PC upgrade at all (if it's not urgently needed), rather than getting a half-baked solution which might not work all that well.

Of course this is simplified. Actually the "as cheap as possible" does relate somewhat to the rough budget. And how I value a system also includes some estimate of how long it will stay usable. If spending 50€ more on a CPU with significantly higher single-threaded performance means this system will still feel snappy enough for 2 - 3 years longer than the alternative, then it's an upgrade which actually enhances the value. That's not easy to predict, though.
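To make the procedure concrete, here is a minimal sketch in Python; the parts, prices and budget are invented placeholders, not recommendations:

    # Minimal sketch of the five steps above; all prices are invented.
    base_build = {"CPU": 60, "board": 90, "RAM": 65, "PSU": 55}  # step 1: cheapest adequate build
    upgrades = {"PSU": 25, "CPU": 50}  # step 3: candidates that add real value
    budget = 420

    def total(build):
        return sum(build.values())

    build = dict(base_build)
    if total(build) <= budget:                      # step 2: quick budget check
        for part, extra in upgrades.items():
            if total(build) + extra <= budget:      # step 4: is the price still OK?
                build[part] += extra                # step 3: apply the value upgrade
    # step 5 (compromises) would trim the build if the budget check failed
    print(build, total(build))

The point of the sketch is only the ordering: start from the cheapest adequate configuration and add upgrades one by one while they still pay off, instead of starting from a budget and filling it.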

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31343 - Posted: 7 Jul 2013 | 21:59:13 UTC

The T7400 has two Xeon E5430s and thus 8 real cores, and runs quite cool. But these Xeons are old and end-of-life, as Intel said.
The most recent Xeons are too expensive, but what are the thoughts about a previous-generation Xeon?


____________
Greetings from TJ

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 31348 - Posted: 8 Jul 2013 | 8:59:37 UTC - in response to Message 31343.

I think your two E5430 Xeons will be pretty decent crunchers! Not high-end by any means, but the fact you have 8 real full cores (with FP units) means you'll have decent BOINC throughput. 8 full cores aren't easy to find!

Look here for this processor's relative performance in cpubenchmark.net. The fact that you have two CPUs means that, at least for BOINC usage, you'll have almost twice the performance, being somewhere here. Not bad at all!
____________

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31349 - Posted: 8 Jul 2013 | 9:22:38 UTC - in response to Message 31348.
Last modified: 8 Jul 2013 | 9:23:57 UTC

I think your two E5430 Xeons will be pretty decent crunchers! Not high-end by any means, but the fact you have 8 real full cores (with FP units) means you'll have decent BOINC throughput. 8 full cores aren't easy to find!

Look here for this processor's relative performance in cpubenchmark.net. The fact that you have two CPUs means that, at least for BOINC usage, you'll have almost twice the performance, being somewhere here. Not bad at all!

That is a nice overview Vagelis, thanks.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31351 - Posted: 8 Jul 2013 | 17:11:17 UTC - in response to Message 31335.

Skgiven has a nice idea of getting a cheap i3 or i5 and making a rig for GPU crunching only. But I know myself: I will do more with the system, and then it won´t work. Not that I am dependent on one system. I have taken a lot of old stuff home from work... Won´t do that anymore, unless there is a T7400 :)

I went for an even cheaper G2020 (~£50), rather than an i3 for ~£120, an MSI board with 3 slots (2 immediately useful, but the 3rd has potential if I use a riser) and a reasonable 80+ PSU (30 decibels at 400W).
My i7-3770K @4.2GHz is in an identical 2/3 PCIE slot sub-€100 motherboard with two GPUs (was 3 for a while with a riser).
While the Gigabyte GA-Z87X-OC is an excellent motherboard and something I might consider if I wanted 3 or 4 top GPU’s (GTX780s) for a new build, my ambitions are rather more reserved, and I like the challenge of doing it on the cheap.
Going by the specs, even the ‘GA-X58-OC Force’ motherboard won’t allow you to have 4 GPU running at PCIE3X16. Anyway, it's an extreme end board and requires water cooling - it's for rich plumbers :)

The two GTX 660s cost me €420, so €580 for the rest is not possible in the Netherlands.

You can buy the following from Amazon.de:
    Two GTX 660’s (€182each) €364
    Corsair TX650 €75
    Motherboard ~€90
    8GB DDR3 €65
    CPU ~€60
    128GB SSD €78
    External USB2 DVDRW €30
    Antec Three Hundred Two €66
    - Total €828
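For what it's worth, the list adds up; a trivial Python check, with prices exactly as quoted above:

    # Sum of the Amazon.de parts list above.
    parts = [2 * 182, 75, 90, 65, 60, 78, 30, 66]
    print(sum(parts))  # 828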



____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31353 - Posted: 8 Jul 2013 | 20:47:52 UTC - in response to Message 31325.

By sine wave UPS I was indicating they were the APC Smart-UPS 600. I was also oversimplifying a bit (saving some typing). All were APC Smart-UPS models. One of the failed Intel servers was on a Smart-UPS 900 and one was a Smart-UPS 1400 with both an Intel server and an AMD desktop attached. The Intel server failed and the AMD ran until it was obsolete for my use and I gave it to someone who needed a PC.

That is very strange. However, you should have asked APC for some compensation... Was there no insurance on those UPSes?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31356 - Posted: 8 Jul 2013 | 21:44:32 UTC - in response to Message 31343.

The T7400 has two Xeon E5430s and thus 8 real cores, and runs quite cool. But these Xeons are old and end-of-life, as Intel said.
The most recent Xeons are too expensive, but what are the thoughts about a previous-generation Xeon?

8 physical cores are nice for throughput, but essentially these Xeons are Core 2 Quad Q9450s with a slightly lower TDP (80 W vs. 95 W). Their performance compares best to current CPUs on computationally dense code which is not limited by memory performance, where HT doesn't help much, and which doesn't use anything newer than SSE4.1.

Last-gen Xeons are always a bad idea for BOINC because, short of going to 4 sockets, they are just the same as the desktop chips. Slightly different binning and configurations, but no fundamental differences. By going to the last gen you hardly save money, but you pretty much always lose power efficiency and features. That's why I don't recommend Sandy Bridge for BOINC any more.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31360 - Posted: 9 Jul 2013 | 6:55:33 UTC - in response to Message 31353.

By sine wave UPS I was indicating they were the APC Smart-UPS 600. I was also oversimplifying a bit (saving some typing). All were APC Smart-UPS models. One of the failed Intel servers was on a Smart-UPS 900 and one was a Smart-UPS 1400 with both an Intel server and an AMD desktop attached. The Intel server failed and the AMD ran until it was obsolete for my use and I gave it to someone who needed a PC.

That is very strange. However, you should have asked APC for some compensation... Was there no insurance on those UPSes?

Homeowners insurance paid for most of it since there was enough damage to go well over the deductible.

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31375 - Posted: 9 Jul 2013 | 21:55:28 UTC - in response to Message 31265.
Last modified: 9 Jul 2013 | 21:56:28 UTC

@Zoltan: I'd be very interested in an investigation whether Haswell scales better with HT than previous chips. The reason I suspect this: it's got more execution units! They're not terribly relevant to FP crunching, but in mixed workloads they should definitely show. And BOINC is the best real world usage case for mixed workloads.
I know this is not easy to measure, but so far I haven't seen any meaningful data besides the usual benchmarks. I also suspect Haswell will show quite some teeth under heavy server load.. but still no benchies.

Now that I've fixed my random restarts (it turned out that my old memory module kit (KVR1333D3N9K2/4G) is not compatible with my new motherboard), I can do some benchmarks. How can I run mixed workloads? (Should I install two BOINC managers on this host?) What other benchmarks do you want me to run? (I've read on a local hardware test site that the Apache web server is actually slower on the i7-4770K than on previous CPUs.)

@Haswell in general: I think people are looking at this chip in the wrong way, or were simply expecting too much. I think Haswell is great for BOINC, even if you can't use AVX2 yet. Great in the way "a significantly better choice than Ivy, for the same price".

I do agree with you.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31379 - Posted: 10 Jul 2013 | 8:35:58 UTC - in response to Message 31351.

Skgiven has a nice idea of getting a cheap i3 or i5 and making a rig for GPU crunching only. But I know myself: I will do more with the system, and then it won´t work. Not that I am dependent on one system. I have taken a lot of old stuff home from work... Won´t do that anymore, unless there is a T7400 :)

I went for an even cheaper G2020 (~£50), rather than an i3 for ~£120, an MSI board with 3 slots (2 immediately useful, but the 3rd has potential if I use a riser) and a reasonable 80+ PSU (30 decibels at 400W).
My i7-3770K @4.2GHz is in an identical 2/3 PCIE slot sub-€100 motherboard with two GPUs (was 3 for a while with a riser).
While the Gigabyte GA-Z87X-OC is an excellent motherboard and something I might consider if I wanted 3 or 4 top GPU’s (GTX780s) for a new build, my ambitions are rather more reserved, and I like the challenge of doing it on the cheap.
Going by the specs, even the ‘GA-X58-OC Force’ motherboard won’t allow you to have 4 GPU running at PCIE3X16. Anyway, it's an extreme end board and requires water cooling - it's for rich plumbers :)

The two GTX 660s cost me €420, so €580 for the rest is not possible in the Netherlands.

You can buy the following from Amazon.de:
    Two GTX 660’s (€182each) €364
    Corsair TX650 €75
    Motherboard ~€90
    8GB DDR3 €65
    CPU ~€60
    128GB SSD €78
    External USB2 DVDRW €30
    Antec Three Hundred Two €66
    - Total €828



Thanks for your suggestions skgiven, appreciated.
But the GTX 660s at Amazon.de could not be delivered to the Netherlands; some parts can be and some can't, even when sold and shipped by Amazon. That's why I bought them in the Netherlands for more.
I will invest in an Intel CPU; therefore I want a six-core with HT and will wait until they arrive, as Zoltan mentioned.
For now I want a new rig to fit the two 660's nicely, and that will be an AMD-based one, as I already have the Sabertooth MOBO for it. I always search for the best or second-best parts when they are on sale. Eventually they need to replace my old stuff, but I need those for working from home, so the old power users will stay around for a while, and it is nice that when they are powered on they crunch a bit.
In the past I have looked for and bought the cheaper parts, but I will never do that anymore due to bad experience.
Most of my rigs are Dell, not without reason, but I know lots of others think otherwise. Some are more than 10 years old and still work great, even all the fans. From brandless rigs, I had to replace fans within a year. Could absolutely be coincidence though.


____________
Greetings from TJ

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31385 - Posted: 10 Jul 2013 | 14:42:46 UTC - in response to Message 31379.

Some are more than 10 years old and still work great, even all the fans. From brandless rigs, I had to replace fans within a year. Could absolutely be coincidence though.

Totally depends on the fans you buy. For instance the Antec case I mentioned comes with 2 tri-speed fans. They generally last a long time. If you add more fans you can go cheap and replace them more often, or you can get quality fans and probably never replace them. BTW I've had to replace plenty of fans on Dell and other brand-name systems too.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31389 - Posted: 10 Jul 2013 | 22:46:55 UTC - in response to Message 31385.
Last modified: 10 Jul 2013 | 22:48:10 UTC

Some are more than 10 years old and still work great, even all the fans. From brandless rigs, I had to replace fans within a year. Could absolutely be coincidence though.

Totally depends on the fans you buy. For instance the Antec case I mentioned comes with 2 tri-speed fans. They generally last a long time. If you add more fans you can go cheap and replace them more often, or you can get quality fans and probably never replace them. BTW I've had to replace plenty of fans on Dell and other brand-name systems too.

Of course that happens; even Dell uses Samsung stuff, as an example. I never go cheap. I have brands I like and have good experience with, and I select from them. So I do with coffee and pens, but also with hardware. These days Samsung has improved, but I will never buy it anymore, as my experiences in the past were bad.
That is the reason I have two EVGA GTX 660's; I could have got them cheaper from another brand.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31390 - Posted: 11 Jul 2013 | 0:14:45 UTC - in response to Message 31389.
Last modified: 13 Jul 2013 | 13:22:31 UTC

Going back a bit to the APU discussion, the benefit of the A10-6800K over the A10-6700 is limited to the CPU’s top performance and supposedly 2133MHz RAM vs 1866MHz for the lesser model (which might help at some GPU projects). It’s not a processor I would recommend for several reasons. The 100W TDP is poor vs the 65W TDP of the A10-6700, which has the same GPU with the same GPU clocks and shader count. It’s there for people that have inexpensive electricity or want a slightly faster processor (performance at any cost) but don’t want/need a discrete GPU, and chumps.
At a basic level iGPU’s don’t yet have enough oomph for anything other than lightweight crunching. It’s still the case that a basic dual-core CPU with a low-end to mid-range GPU will outperform a high-end iGPU. Unfortunately most reviewer sites aren’t paid to demonstrate this, but you can see the reality here!

While both AMD and Intel CPU’s with integrated GPU’s make for good desktop/office systems, they are not really much use for crunching, relative to discrete GPUs. BTW, I found that AMD iGPU processors ran surprisingly cool and quiet with the boxed heatsink – even when crunching.

By the end of the year Kaveri 3rd Generation A series APUs should change things a bit, maybe up to a 30% improvement in terms of CPU performance/Watt. The APUs will be 28nm Graphics Core Next, and the CPUs are supposed to bring a 15 to 20% improvement in IPC (instructions per clock). While this is relatively better than Intel’s i7-3000 to i7-4000 move (5 or 10%), AMD has to catch up on several 22nm Intel revisions. I thought this might bring in 35 to 45W APU desktop processors, but the speculation is still 65W, which is actually a good thing. For GPU crunching, and especially mixed CPU/GPU, hUMA will be very interesting. It should at least remove some bottlenecks and better facilitate the devs, but I suspect it might find itself a special place with some projects. It’s really the first glimpse of the next step forward in computing, and it’s far too early to say if it will only increase GPU performance by ~40% or more than double it…
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31396 - Posted: 11 Jul 2013 | 8:53:53 UTC

From what I have read and heard from other people, the built-in graphics from Sandy Bridge aren´t great either.
What I see is that a lot of (off-the-shelf) computers are built smaller and smaller. BOINC started as a project to use a PC´s idle CPU time only. But now we have GPU´s to crunch really fast. And a lot of crunchers build crunch-dedicated rigs.
Personally I think that building the features of two devices into one will always involve a loss somewhere. The link that skgiven posted shows that clearly.
I think these APU´s are great for office use and browsing/mailing.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31417 - Posted: 11 Jul 2013 | 19:36:05 UTC

@APUs: comparing HD4600 to GTX650Ti doesn't make much sense other than to show that discrete GPUs can be much faster - at significantly higher cost (in transistors and power consumption). Apart from that the two are in completely different leagues.. it's a bit like comparing GTX650 to HD7970. Of course the fatter card wins, but otherwise it doesn't tell us much.

And the current Intel GPUs are obviously still catching up, but they're slowly getting there. Starting from Ivy they can crunch BOINC OpenCL. My HD4000 could do ~45k RAC at Collatz, but it is shifted to Einstein now for 9.5k RAC. That's not much.. but it only consumes ~20 W and I have it anyway.

Looking further into the future, discrete GPUs will still always be faster, as long as they don't require fast communication with the CPU. The reason is simply power consumption (up to 130 W shared with the CPU vs. up to 300 W for GPUs) and memory bandwidth (memory soldered onto a PCB near the processor can always be faster than socketed memory further away). Die-stacked DRAM could provide an APU with massive bandwidth, and we could even argue: if a GPU is allowed to pull 300 W, why isn't a CPU? But we'd still be looking at "300 W shared with the CPU or all for the GPU".

However, I can also see some really neat things happening. Soon CPU and GPU will get a unified address space; both are slowly merging, as evidenced by AMD's Fusion roadmaps for some time. Where could this lead? Imagine a CPU module not unlike Bulldozer: maybe increase the number of integer cores per module to 4 or 8, and instead of letting them share 2 FPUs, let them share a bunch of GPU shaders (e.g. one GCN compute cluster) for floating point and vector operations. That's the ultimate Fusion!

As far as I know it's not yet on the roadmaps and you'd need to provide some low latency and special function units as well.. but the more sharing, the better the heavy number crunching will be.

@Zoltan: by "mixed" I meant a mix of integer and floating point operations. The easiest way to produce them would be to run x tasks of an integer-only project like ABC or Collatz and x tasks of some regular project. Measure overall throughput wiith HT on and off, then do the same on an Ivy (or Sandy). This would be enough to quantify if HT scaling became better under mixed loads.

Giving both the same resource share could be enough to keep BOINC running x/x tasks.. but using 2 separate managers may be safer. This depends on how long you need to average WU runtimes to know how fast the setup is, which will depend on the projects chosen.

And since Einstein has responded well to HT in the past it would be nice to see HT scaling of Haswell vs. Ivy just for Einstein. I'm not sure if I would like to run such tests if I had a Haswell.. but these are the tests I think we're still missing :)
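As a sketch of the bookkeeping only (the task counts below are invented placeholders, not measurements), the comparison boils down to one throughput ratio per platform:

    # HT scaling = mixed-load throughput with HT on / throughput with HT off.
    # Substitute your own averaged tasks-per-day figures for the placeholders.
    def ht_gain(tasks_per_day_ht_on, tasks_per_day_ht_off):
        return tasks_per_day_ht_on / tasks_per_day_ht_off

    haswell = ht_gain(33.0, 25.0)  # e.g. x Collatz + x Einstein tasks per day
    ivy = ht_gain(30.0, 25.0)
    print(f"Haswell: {haswell:.2f}x, Ivy: {ivy:.2f}x under mixed load")

If Haswell's ratio comes out consistently higher than Ivy's on the same task mix, that would support the extra-execution-units hypothesis.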

MrS
____________
Scanning for our furry friends since Jan 2002

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31421 - Posted: 11 Jul 2013 | 21:17:31 UTC

Something like this?

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31438 - Posted: 12 Jul 2013 | 17:07:54 UTC - in response to Message 31390.

Going back a bit to the APU discussion, the benefit of the A10-6800K over the A10-6700 is limited to the CPU’s top performance and supposedly 2133MHz RAM vs 1866MHz for the lesser model (which might help at some GPU projects). It’s not a processor I would recommend for several reasons. The 100W TDP is poor vs the 65W TDP of the A10-6700, which has the same GPU with the same GPU clocks and shader count.

While both AMD and Intel CPU’s with integrated GPU’s make for good desktop/office systems, they are not really much use for crunching, relative to discrete GPUs. BTW, I found that AMD iGPU processors ran surprisingly cool and quiet with the boxed heatsink – even when crunching.

By the end of the year Kaveri 3rd Generation A series APUs should change things a bit, maybe up to a 30% improvement in terms of CPU performance/Watt. The APUs will be 28nm Graphics Core Next, and the CPUs are supposed to bring a 15 to 20% improvement in IPC (instructions per clock). While this is relatively better than Intel’s i7-3000 to i7-4000 move (5 or 10%), AMD has to catch up on several 22nm Intel revisions. I thought this might bring in 35 to 45W APU desktop processors, but the speculation is still 65W, which is actually a good thing. For GPU crunching, and especially mixed CPU/GPU, hUMA will be very interesting. It should at least remove some bottlenecks and better facilitate the devs, but I suspect it might find itself a special place with some projects. It’s really the first glimpse of the next step forward in computing, and it’s far too early to say if it will only increase GPU performance by ~40% or more than double it…

Interesting read, thanks skgiven. Haven't paid a lot of attention to the A10 but your comments made me take a look.

From the ITPRO review: "Intel Haswell vs AMD Richland head-to-head":

"The A10-6800K scored 0.81 in our tests – an improvement on the 0.76 scored by the A10-5800K. The A10-6700 wasn’t far behind, scoring 0.79 in the same tests. It’s progress, but it’s not enough to match Intel’s Haswell-based Core i5s – in terms of pure application performance, these top-end APUs match Ivy Bridge-based Core i3 chips.

It’s in gaming where Richland APUs make up ground. The A10-6800K’s Radeon HD 8670d graphics core scored 90fps in our 1,366 x 768 Low-quality Crysis test, and 43fps when we upped the resolution to 1,600 x 900 and quality settings to Medium. In both tests that’s five frames faster than Haswell’s HD Graphics 4600 could manage, and even further ahead of Ivy Bridge.

The gap was even more pronounced in Just Cause 2: the A10-6800K averaged 64fps in the game’s low-quality benchmark, but the Core i7-4770K scored just 44fps.

AMD’s latest chips perform well without consuming much power. Our test rig, comprising 8GB of RAM and an SSD alongside the A10-6800K, consumed a miniscule 37W when idle and 108W when stress-tested. Our high-end Haswell machine required just 38W when idling – but this figure rocketed to 185W at peak load."

http://www.itpro.co.uk/desktop-hardware/19975/intel-haswell-vs-amd-richland-head-head/page/0/3

Like you say, not a lot of compute difference between the 6800K and the 6700. The 35 watt TDP difference should make the A10-6700 even more efficient.

"Conclusion

Haswell generates plenty of column inches for Intel thanks to its dominance at the top of the processor market but, unless you’re going to make full use of an expensive Core i7-4770K or i5-4670K, it’s worth looking on the other side of the fence.

After all, there’s plenty to like about AMD’s top-end A10-6800K APU. It’s got more graphical power than anything Intel can muster, it’s more frugal, and it’s much cheaper than Intel’s top-end Haswell chips. It can’t quite match Haswell when it comes to pure processing power, but that’s its only weakness – and, for many, it’ll be more than good enough.

It’s not as one-sided as you might think. If you want pure processing power and don’t mind the cost then Haswell is for you. If it’s a balanced experience you’re after and graphics and budget are the priority then a top-end APU is the chip of choice.

http://www.itpro.co.uk/desktop-hardware/19975/intel-haswell-vs-amd-richland-head-head/page/0/4

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31471 - Posted: 13 Jul 2013 | 15:22:54 UTC - in response to Message 31438.
Last modified: 13 Jul 2013 | 15:29:21 UTC

If you look at the iGPU/APU reviews, several things stand out. CPU-bound processing is generally better on the Intel processors, but gaming is better on AMD’s APU’s. At the same price range you need to compare the APU’s with i3 processors, and in this situation the AMD processors are better for gaming while the Intel processors are better at most other things. However, AMD's competitive edge actually extends to the i7-4000 processors: it’s still the case that gaming on the integrated AMD GPU is better.
http://hexus.net/tech/reviews/cpu/57197-amd-a10-6700-32nm-richland/?page=6

The alternative to an iGPU/APU is a discrete GPU, so it’s reasonable enough to compare them. A discrete GPU consumes more power not only because it has its own board, but because it’s more powerful (does more work). It’s the very lack of GPU transistors that makes the Intel hd4600 so uncompetitive against discrete GPU’s. The hd5200 compares better simply because it’s twice as big, but the increase in transistors requires a CPU downclock, suggesting Intel would struggle to add more graphics performance as things stand. I would be inclined to ignore the transistor costs, or argue that AMD have the better balance, and concern myself with purchase costs. The fact remains that you can build a better system with a discrete GPU for less than an iGPU system.

If we are talking about crunching it’s better to think of performance per Watt of the system. The 15W pulled by an hd4600 at Einstein still has to be supported by the rest of the system. Even if you OC this iGPU it’s not going to compete well with a mid-range discrete GPU in terms of System performance/Watt.
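To put rough numbers on that argument (every figure below is an illustrative assumption, not a measurement):

    # System-level efficiency: credit per day divided by total wall power.
    def rac_per_watt(rac_per_day, device_watts, rest_of_system_watts):
        return rac_per_day / (device_watts + rest_of_system_watts)

    igpu = rac_per_watt(8_500, 15, 55)        # hd4600-class iGPU at Einstein
    discrete = rac_per_watt(60_000, 120, 55)  # assumed mid-range discrete card
    print(f"iGPU: {igpu:.0f} RAC/W, discrete: {discrete:.0f} RAC/W")

Even though the discrete card pulls far more power, the fixed cost of the rest of the system (assumed 55W here) dominates the iGPU's total, so the bigger card wins on system performance/Watt.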

While I don’t think this situation will continue (entry-level GPU’s are set to disappear as Intel and AMD eat into that market on the laptop and desktop fronts), it will take time (years). This has actually been developing for a long time - A6 and A8. The hd4000 and hd5000 just introduce some tangible competition by Intel. Unfortunately a lot of ‘release’ reviews showed the hd4600 to be a good iGPU against the latest APU’s, but a broader look has shown the APU’s can outperform the hd4600 in non-CPU-dependent situations. AMD's APU’s are better priced and their running cost/performance is better, depending on what you are doing.

When it comes to crunching there are only two projects that can presently utilize the iGPU’s for OpenCL, but with time this should change. More projects have used the APU’s, and in my experience the AMD’s run cooler while crunching flat out (CPU+iGPU). IIRC the GPU of an A6 got about 30K at POEM. While my hd4000 can get 6.5 to 8.5K at Einstein (depending on what else I’m doing), last year’s Trinity appears to be able to get 36K (but possibly only 11.3K/day). If it is 36K/day, that would suggest Richland would be able to get >40K/day.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31512 - Posted: 14 Jul 2013 | 15:03:03 UTC

@Flashawk: no, I'd replace the FPU in the CPU modules by a compute unit array. Or, on second thought, I'd augment the FPU with this, as you'd still need a low-latency unit for special functions and serial code (anything not handled well by the CU). In the theoretical APU you linked, the GPU and CPU are still separate and communicate through the north bridge.

@"ITPRO-Review": that seems to be rubbish. Under load +150 W for the Haswell? How is the CPU going to do this while staying under 83 W TDP? Sounds more like they included a fat discrete GPU and forgot to tell us (no configuration given, except calling the Intel "high end" and the AMD not).

But they're trying to answer the wrong question anyway: A10-6800K and i7-4770K are simply not alternatives on the desktop. If you want/need maximum CPU power without going straight to socket 2011 or higher, you choose the 4770(K). If you just want/need "enough" CPU performance, you choose the A10-6700 or an i3 or i5. If you want serious gaming you need a discrete GPU anyway.. and should go for the i5. For casual gaming you'd be stupid not to choose the A10 (either one).

In essence: when buying a 4770(K) the iGPU is just a nice bonus which might help little or at least increase the resell value, even for us crunchers. Nothing more and nothing less. This doesn't really change when you consider the 4770R with the eDRAM cache and the biggest iGPU. It outperforms Richland, but you don't buy it because of its iGPU on the desktop!

This all changes in laptop environments, where both chips are power-limited and you can't just stick a discrete GPU in there.. but that's not what we're concerned with here.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31522 - Posted: 14 Jul 2013 | 19:41:14 UTC - in response to Message 31512.

It wasn't just the CPU's power; it was the entire system/machine that used 185W (at the wall).
Several bench tests will push a processor over its TDP, and it is a K model, so perhaps they had an OC.

"Our high-end Haswell machine required just 38W when idling – but this figure rocketed to 185W at peak load".
Ref: http://www.itpro.co.uk/desktop-hardware/19975/intel-haswell-vs-amd-richland-head-head/page/0/3
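Either explanation works arithmetically; a back-of-the-envelope sketch (all component figures are assumptions, only the 185W wall reading comes from the review):

    # Wall power = sum of DC loads divided by PSU efficiency.
    cpu, extra, board_drives_fans = 84, 60, 25  # watts; 'extra' = OC headroom or a discrete card
    psu_efficiency = 0.90
    print((cpu + extra + board_drives_fans) / psu_efficiency)  # ~187.8 W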
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31590 - Posted: 17 Jul 2013 | 15:20:44 UTC

From Tom's Hardware:

"Here's the bombshell we figured out from today's testing: for overclocking, a quad-core Haswell-based processor at 22 nm requires more cooling than a six-core Sandy Bridge-E CPU at 32 nm, even though its lower power consumption produces less heat. The back-up for this is that two of the coolers from our Sandy Bridge-E cooling round-up re-appeared today with far worse apparent performance. Most overclockers blame Intel’s newer integrated heat spreader and transfer material for this discrepancy. These days, cheap paste replaces solder for connecting the CPU die to the spreader.

Cross-compatibility between LGA 1150, 1155, and 1156 sinks theoretically makes it possible for us to test dozens of heat sinks and fans. Unfortunately, most solutions are too small to cope with the heat issues an overclocked Haswell-based CPU suffers."

http://www.tomshardware.com/reviews/best-heat-sink-haswell,3554-25.html

It also makes one wonder if all this heat buildup is only due to the poor transfer material or if Intel is being less than honest about the power draw of Haswell. Low power ratings sell processors...

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31604 - Posted: 17 Jul 2013 | 22:45:42 UTC - in response to Message 31590.
Last modified: 17 Jul 2013 | 22:57:13 UTC

It also makes one wonder if all this heat buildup is only due to the poor transfer material or if Intel is being less than honest about the power draw of Haswell. Low power ratings sell processors...

They use a poor (cheap) TIM because of the low TDP there's no need for better at 3.6GHz (considering everyday usage - no, crunching is not an everyday use). It's a shame however, that they don't use a better TIM (e.g. soldering) in the K series, which definitely made for overclockers (and costs more while some features are disabled). So the way to heavily overclock a K series CPU leads through IHS removal and TIM replacement - voiding warranty. In my opinion this is intentional from Intel's part, they don't want to give warranty for heavy overclockers.
BTW my tests with my new motherboard and CPU are going so well that I've replaced them in my old host (ASUS P7P55 WS Supercomputer + Core i7-870). The new configuration consumes only 384W (from the outlet) while crunching 2 GPUGrid (NATHAN_KIDKIXc22 + NOELIA_1MG_RUN1) and 4 Rosetta@home workunits simultaneosly. The CPU and the GPUs are about 60-65°C.
The configuration is:
PSU: Enermax Platimax 600W
CPU: Core i7-4770K (@3.5GHz - no overclock yet - factory made cooler)
MB.: Gigabyte GA-Z87X-OC
RAM: Kingston KVR16N11S8K2/8 2x4GB 1600MHz
SSD: Kingston V100 96GB
GPU: ASUS GTX-670-DC2OG-2GD5 x2

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31605 - Posted: 17 Jul 2013 | 22:50:24 UTC - in response to Message 31590.

Heat, I had a lot of problems with heat, the liquid cooler is back in the alienware and does okay when the CPU is doing nothing, 38-40°C, When running Rosetta then fast to 68-72°C.

If I could choose I would buy a CPU that stays cool when active, even if it would use 200Watt, then one with a low TDP and a lot of heat.
Anyway next project is to build the AMD driven rig with the two GTX660's in a big case with a lot of fans. I have checked a lot of them and the CM storm trooper will be it. Plenty of room, lots of fans, dust filters and both panels can be removed for easy building.
____________
Greetings from TJ

werdwerdus
Send message
Joined: 15 Apr 10
Posts: 123
Credit: 1,004,473,861
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31608 - Posted: 17 Jul 2013 | 23:56:14 UTC - in response to Message 31605.

Heat, I had a lot of problems with heat, the liquid cooler is back in the alienware and does okay when the CPU is doing nothing, 38-40°C, When running Rosetta then fast to 68-72°C.

If I could choose I would buy a CPU that stays cool when active, even if it would use 200Watt, then one with a low TDP and a lot of heat.
Anyway next project is to build the AMD driven rig with the two GTX660's in a big case with a lot of fans. I have checked a lot of them and the CM storm trooper will be it. Plenty of room, lots of fans, dust filters and both panels can be removed for easy building.


That doesn't make sense; if it used 200 watts, where would all the watts go? It can't disappear. It gets converted into heat.
____________
XtremeSystems.org - #1 Team in GPUGrid

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31611 - Posted: 18 Jul 2013 | 0:13:21 UTC - in response to Message 31608.

Heat, I had a lot of problems with heat, the liquid cooler is back in the alienware and does okay when the CPU is doing nothing, 38-40°C, When running Rosetta then fast to 68-72°C.

If I could choose I would buy a CPU that stays cool when active, even if it would use 200Watt, then one with a low TDP and a lot of heat.
Anyway next project is to build the AMD driven rig with the two GTX660's in a big case with a lot of fans. I have checked a lot of them and the CM storm trooper will be it. Plenty of room, lots of fans, dust filters and both panels can be removed for easy building.


That doesn't make sense; if it used 200 watts, where would all the watts go? It can't disappear. It gets converted into heat.

Off course it doesn't ;-) that's why I wrote; when I could choose...
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31624 - Posted: 18 Jul 2013 | 11:27:03 UTC - in response to Message 31590.
Last modified: 18 Jul 2013 | 11:30:59 UTC

My understanding is that Haswell Iris Pro is actually 2 chips, a separate CPU and a GPU on the one 1150 processor board,
http://media.pcgamer.com/files/2013/06/Haswell-Iris-Pro.jpg

Are the other Haswell's the same?
I guess there might be cooling issues with that. Just having one Heat Spreader is dubious.

Perhaps they will be on the one chip when they drop to 14nm?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31633 - Posted: 18 Jul 2013 | 19:02:47 UTC - in response to Message 31522.
Last modified: 18 Jul 2013 | 19:20:23 UTC

SK wrote:
It wasn't just the CPU's power, it was the entire system/machine that used 185W (at the wall).
Several bench tests will push a processor over it's TDP and it is a K model, so perhaps they had an OC.

"Our high-end Haswell machine required just 38W when idling – but this figure rocketed to 185W at peak load".
Ref: http://www.itpro.co.uk/desktop-hardware/19975/intel-haswell-vs-amd-richland-head-head/page/0/3

I know - that's why I brought this up. They can't just include some other hardware (or OC) in the Intel, not tell us about this, not include it in the AMD system and then attribute the difference in power consumption to the CPU. Well, I'm assuming that there is some other hardware not included in the AMD system, but that's because they don't tell us otherwise and because otherwise the numbers don't make any sense at all.

Beyond wrote:
It also makes one wonder if all this heat buildup is only due to the poor transfer material or if Intel is being less than honest about the power draw of Haswell. Low power ratings sell processors...

It's been measured: power consumption goes up roughly proportional to the performance increase (at the same clock speed), or significantly less than that in AVX2-heavy code (higher performane increase).

The problem with cooling is the insufficient TIM and the smaller area, over which the heat is released compared to e.g. Sandy-E. The Haswell cores got a bit larger compared to Ivy, but they've become almost tiny. Conducting the heat away from such small areas is harder and increases temperatures for a given cooling system.

SK wrote:
My understanding is that Haswell Iris Pro is actually 2 chips, a separate CPU and a GPU on the one 1150 processor board

No, the extra chip is the 128 MB eDRAM working as a L4$ in the top models. The i7 4770R as it too - I'm really curious to see what the cache could do for BOINC, server and professional apps. Intel promised gains in the high double digit percentages for some apps, which I don't doubt can be found in BOINC-land (POEM, Einstein, CPDN come to mind). Sadly it's as present as a ghost, even after being launched.

Edit: having read Tom's article.. they use 1.25 V and call it moderate. Yet that blows power efficiency completely out of the window. For 24/7 crunching I'm using 1.03 V on my 22 nm CPU and wouldn't recommend going any higher than 1.10 V. They're also using LinX, which gains ~70% performance per clock over Ivy thanks to AVX2. but that performance has to come from somewhere, i.e. it's about the most stressful piece of software for Haswell we currently have. Real world BOINC will produce less power consumption & heat.

Cooling Haswell is still not fun.. but not as dramatic as such articles might make you believe. Stay with moderate clocks & voltages and it will be fine. Except if you're TJ ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31634 - Posted: 18 Jul 2013 | 19:27:42 UTC - in response to Message 31633.

SK wrote:
My understanding is that Haswell Iris Pro is actually 2 chips, a separate CPU and a GPU on the one 1150 processor board
http://media.pcgamer.com/files/2013/06/Haswell-Iris-Pro.jpg

No, the extra chip is the 128 MB eDRAM working as a L4$ in the top models

Note that the larger (almost square) chip (on the left) is the 128MB eDRAM, and the smaller (rectangular) chip (on the right) is the Haswell CPU itself.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31639 - Posted: 18 Jul 2013 | 20:33:45 UTC - in response to Message 31634.

It's actually the other way around: link 1 and link 2.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31640 - Posted: 18 Jul 2013 | 20:47:42 UTC - in response to Message 31639.

It's actually the other way around: link 1 and link 2.

MrS

Wow!
Then this is a completely different Haswell CPU. I didn't thought that Intel will actually make 2 different chips (just disable the 'unnecessary' parts in the lesser chips)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31642 - Posted: 18 Jul 2013 | 21:21:04 UTC - in response to Message 31640.

Ideed, it's almost quadratic rather than rectangular like with GT1 or GT2 grapgics. Doubling the EUs from 20 to 40 increases the iGPU die size significantly. And especially with the numbers Intels sells it would be far too wasteful to only work with deactivated units in this granularity.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31646 - Posted: 19 Jul 2013 | 2:58:38 UTC - in response to Message 31604.

It also makes one wonder if all this heat buildup is only due to the poor transfer material or if Intel is being less than honest about the power draw of Haswell. Low power ratings sell processors...

They use a poor (cheap) TIM because of the low TDP there's no need for better at 3.6GHz (considering everyday usage - no, crunching is not an everyday use). It's a shame however, that they don't use a better TIM (e.g. soldering) in the K series, which definitely made for overclockers (and costs more while some features are disabled). So the way to heavily overclock a K series CPU leads through IHS removal and TIM replacement - voiding warranty. In my opinion this is intentional from Intel's part, they don't want to give warranty for heavy overclockers.

So the Intel desktop strategy is: minimal performance "upgrade", increase power consumption, charge twice as much as the competition for substandard construction. Is that the gist of it? We've seen this before from the big "I" (Only the Paranoid Survive).

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31650 - Posted: 19 Jul 2013 | 10:45:06 UTC - in response to Message 31646.

So the Intel desktop strategy is: minimal performance "upgrade", increase power consumption, charge twice as much as the competition for substandard construction. Is that the gist of it? We've seen this before from the big "I"

Tides are turning. We all have to bow to the fact that the desktop segment became less important (less profitable etc.) than the mobile segment, so the desktop strategy is subject to the mobile strategy. The desktop computing is the past, it has only a couple of years left.
It depends on how you define "performance". In the terms of computing efficiency, the 4xxx series is better than the 3xxx. However, there's no point in upgrading from the 3xxx series for those who already have that. The 4xxx series has a larger iGPU, so using the same manufacturing technology as the 3xxx series, it's quite logical that its power consumption will be higher.

(Only the Paranoid Survive).

And the lucky too:)
BTW the success of mobile computing made Intel (and AMD) paranoid, as their products are not as power efficient as the competition's.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31658 - Posted: 19 Jul 2013 | 15:26:20 UTC - in response to Message 31650.

If I understand correct, that if I would like a new CPU and not use the iGPU and be power effective, I could better buy a 3xxxx series?
____________
Greetings from TJ

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31665 - Posted: 19 Jul 2013 | 18:59:38 UTC - in response to Message 31658.

If I understand correct, that if I would like a new CPU and not use the iGPU and be power effective, I could better buy a 3xxxx series?

No, if you don't use the iGPU, it won't consume much energy.
If you intend to buy a new CPU (or GPU) for crunching, don't buy ones built on older technology.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31693 - Posted: 21 Jul 2013 | 12:02:28 UTC - in response to Message 31646.

So the Intel desktop strategy is: minimal performance "upgrade", increase power consumption, charge twice as much as the competition for substandard construction. Is that the gist of it? We've seen this before from the big "I" (Only the Paranoid Survive).

Nope. The strategy is "don't do more than you have to in order to stay profitable".

minimal performance "upgrade"

Increasing IPC by almost 10% is actually a lot, if the starting point of the optimization was as refined as Ivy Bridge! For an evolution of the architecture that's quite good, since the low-hanging fruit have pretty much already been picked. There are certainly diminishing returns if you try to push CPU performance further.

You could argue that you'd rather want more cores or something truely revolutionary.. but Intel would argue that more cores wouldn't benefit most users that much, especially not with dual channel memory, would blow the mainstream power budget and would happily sell you socket 2011 if you really want them.

For massively parallel FP code you could push much more aggressively for higher performance.. but AVX2 already adresses that. And pushing much further would result in something like including Larrabee cores.

increase power consumption

Nothing's wrong with that as long as performance also increases. Then you can achieve the same performance at lower clocks & voltages and improve load power efficiency. I don't really get what people expected from Haswell, at the same node as Ivy.. higher performance and lower power consumption? You can have either, but not both.

charge twice as much as the competition

... for a faster and more energy efficient CPU, sure. Compare prices for CPUs of roughly equivalent performance and it's going to be Piledriver vs. i5 or Richland vs. i3. You can make the point that AMD offers good value here (depending on what's important to you), but Intel charges nowhere near double the price for these models.

for substandard construction

What does the construcion matter if the product is still excellent? Sure, for the K models using the TIM can not be excused.. but for stock CPUs it hardly matters.

The 4xxx series has a larger iGPU, so using the same manufacturing technology as the 3xxx series, it's quite logical that its power consumption will be higher.

No, it's actually the CPU part that draws more power. The iGPU is completely power gated if it's not being used and practically doesn't draw any power at all.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Retvari Zoltan
Avatar
Send message
Joined: 20 Jan 09
Posts: 2343
Credit: 16,201,255,749
RAC: 7,520
Level
Trp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31697 - Posted: 21 Jul 2013 | 17:06:32 UTC - in response to Message 31693.
Last modified: 21 Jul 2013 | 17:12:21 UTC

The 4xxx series has a larger iGPU, so using the same manufacturing technology as the 3xxx series, it's quite logical that its power consumption will be higher.

No, it's actually the CPU part that draws more power.

I still have much to learn about Haswell. :)
For those, who want to compare the power consumpion of their 3xxx series (or other): according to HwMonitor, my 4770K@3.7GHz (1.085V) consumes 52.57W~56.62W (IA Cores: 43.71W~47.74W, Uncore: 9.55W~10.22W) while crunching 4 rosetta's and 2 GPUGrid tasks. (The iGPU is turned off)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31699 - Posted: 21 Jul 2013 | 19:44:04 UTC - in response to Message 31697.

Could you do a quick test and change CPU voltage? Maybe go straight for 1.00 V at default clock, I'd expect it to work. The reason I'm asking: from experience with my Sandy and Ivy I suspect the reported power draw does not take the actual voltage into account (so I don't know which voltage). I'm 100% sure about this for the reported power draw of my HD4000.

So in essence.. I'm not sure these reported power draw numbers are any good.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31765 - Posted: 27 Jul 2013 | 15:09:08 UTC

I have built my AMD rig (sooner than planned and in an old case) it has the Sabertooth mobo and FX8350 black edition with stock CPU cooler.
I like this set up as it is drawing only 230Watt when running 8 fightmalaria WU's.
That did never happened with my Bloomfield. The CPU temperature stays at 61°C with 8 cores full working. They are all at 100%, that is also better then I saw on the Bloomfield. When crunching GPUGRID with the GTX770 the CPU temperature does not change but power draw increases to 320Watt with non CPU tasks and 384W with 4 CPU tasks.
When the system is idle it is using 120W. This is awesome low compared to my Bloomfields.

The only negative is the noise of the CPU stock cooler, that is like a jet engine.
I am not saying I am an AMD fan now, but there will be a second one in the future.
____________
Greetings from TJ

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31767 - Posted: 27 Jul 2013 | 15:24:44 UTC - in response to Message 31765.

I just replied in some other thread.. 120 W idle is much better now than 190 W. Yet.. it's still far too much for a modern system, you should be seeing around 70 W.

MrS
____________
Scanning for our furry friends since Jan 2002

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31768 - Posted: 27 Jul 2013 | 15:37:56 UTC - in response to Message 31767.

I just replied in some other thread.. 120 W idle is much better now than 190 W. Yet.. it's still far too much for a modern system, you should be seeing around 70 W.

MrS

Yes I saw your post, 190W was my mistake. When idle its a bit varying between 101-120W. I will check power management.

But the PSU isn't modern that one is from 2009 a M600 80Plus Bronze.
____________
Greetings from TJ

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32275 - Posted: 25 Aug 2013 | 21:42:33 UTC - in response to Message 31768.

I've been reading some interesting performance observations regarding the choice of CPU for the Titan/GTX780.
Basically, it's been suggested that there might be some gain from using the Intel PCIE3 on-die controllers rather than an AMD chipset.

Can we give this due consideration by continuing this thought-provoking discussion here?

Ref: http://www.gpugrid.net/forum_thread.php?id=3440&nowrap=true#32274

FWIW, my TITAN does them in 4300-4600 with no threads reserved. I will try reserving a thread to see if there is any difference. This is not in the DP-enhanced mode. So it will automatically OC up as must as temps allow.

After more testing, I see that my TITAN takes 4520 seconds with or without a reserved thread. No difference at all.

That's goos news.
Zarck's Titan still needs 4700 secs. Then maybe the AMD architecture is to blame for that. The AMD FX CPU don't have integrated PCIe controller, it uses a Hypertransport link to the North Bridge. The AMD 990FX NB has "only" 2x PCIe 2.0 x16 support, while the Intel i7-3770 and 4770 has (only one) integrated PCIe 3.0 x16. The PCIe 2.0 x16 is quite enough for the GK104 (up to the GTX 680 and 770), however it could be hindering the performance of the GK110 based cards (GTX780 and Titan), because they have 50% and 75% (respectively) more CUDA cores than a GK104 based card.


____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Zarck
Send message
Joined: 16 Aug 08
Posts: 145
Credit: 328,473,995
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32279 - Posted: 25 Aug 2013 | 22:45:20 UTC - in response to Message 32275.

I expect the next generation to enter Amd PCI Express 3.0.



@+
*_*
____________

Profile Zarck
Send message
Joined: 16 Aug 08
Posts: 145
Credit: 328,473,995
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32280 - Posted: 25 Aug 2013 | 23:04:25 UTC - in response to Message 32278.

Some units with my Titan is not more than 66% load, why?

https://www.dropbox.com/s/0qrhfzoxkb5446g/GpuGridTitan.jpg

@+
*_*
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32282 - Posted: 25 Aug 2013 | 23:42:40 UTC - in response to Message 32280.
Last modified: 25 Aug 2013 | 23:46:07 UTC

You are running 8 Rosetta WU's - Try running 7 Rosetta WU's; set Boinc to use 99% of your CPU. Post your findings (including CPU usage from Task Manager, and run relative GPUGrid run times).
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Zarck
Send message
Joined: 16 Aug 08
Posts: 145
Credit: 328,473,995
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32286 - Posted: 26 Aug 2013 | 9:09:19 UTC - in response to Message 32282.

While other GPUGRID units running at over 84% without changing the settings?

https://www.dropbox.com/s/stx56zuv50atzqj/GpuGridTitan2.jpg

If I run Folding I do not have this problem.

https://www.dropbox.com/s/t5fs17aq678uahq/Folding.jpg

@-
*_*
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32287 - Posted: 26 Aug 2013 | 11:49:12 UTC - in response to Message 32286.
Last modified: 26 Aug 2013 | 12:03:22 UTC

I cant see the names of the respective GPUGrid WU's but there have been several different WU types, and different work utilizes the GPU to different extents.

The Rosetta WU's might also tax the CPU more at certain stages of the run, or your chipset might be limiting performance somewhat. For any given GPUGrid WU, if you change the CPU usage from 100% to 99% you will either notice a change in GPU utilization or you won't. If GPU usage rises then its clear that the CPU usage is an issue. If you don't see any improvement then the CPU usage is probably not hindering GPU computation to any great extent (though you would really need to test this against several WU's to get an accurate performance measurement). This varies depending on the GPUGrid WU type and what the CPU is doing and possibly it's reliance on RAM...

Folding is a very different application to ACEMD. They have different GPU usages and use the CPU to different extents (ditto for apps at other projects such as MW, Albert, Einstein...). This thread is about CPU comparisons, so if you want to compare Folding to ACEMD start another thread in GPUGRID CAFE or Number Crunching.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 32403 - Posted: 28 Aug 2013 | 18:54:49 UTC

@Zoltans comment: there's also a latency penalty involved with having to go through the chipset northbridge on AMD versus PCIe controllers integrated into the CPU for Intel. Usually this doesn't matter much because the latency is still high on Intels compared to whatever CPU and GPu are doing locally, so GP-GPU apps have to by written to avoid communication and especially time-critical communication. And there's PCIe bandwidth, of course.

@Zarck: I agree with SK, this really doesn't fit this topic. One small comment, though: the GPU utilization at GPU-Grid varies with WU type and more specifically the size of the simulated system. It's normal to see lower utilization on the short queue, for example. So a certain range is totally expected, especially for high-end cards.

MrS
____________
Scanning for our furry friends since Jan 2002

Post to thread

Message boards : Multicore CPUs : CPU Comparisons - general open discussion

//