1) Message boards : Server and website : Again problems with upload server? (Message 47502)
Posted 2 days ago by Profile Retvari Zoltan
... meaning the next person can't start computing without the previous WU in the chain being uploaded.
Don't worry about that. While there are unsent workunits in the queue, everybody can have their next piece of work.
2) Message boards : Graphics cards (GPUs) : PCI-e 16x PCI-e 4x (Message 47495)
Posted 6 days ago by Profile Retvari Zoltan
Anyway, at 95% GPU use the PCIe bandwidth is only at 1%. The most I've EVER seen it at for ANY project was 15% regardless of 16X or 1X.

GTX 1080 @ 2000 MHz / 4763 MHz, Windows 10 v1703, NVidia v382.05, PCIe3.0x16, Bus usage: 30-31%
GTX 1080 Ti @ 1974 MHz / 5454 MHz, Windows 10 v1703, NVidia v382.05, PCIe3.0x16, Bus usage: 33-34%
3) Message boards : Graphics cards (GPUs) : PCI-e 16x PCI-e 4x (Message 47489)
Posted 7 days ago by Profile Retvari Zoltan
But when the 1050 is installed alone it produces more heat than on the second slot...
A PCIe3.0x16 connection is roughly 8 times faster than PCIe2.0x4, which could cause the difference in processing speed and heat dissipation. The other factor is the number of other (CPU) tasks running simultaneously. In my experience, if I run more than 1 CPU task it is not worth it regarding the host's overall RAC, as it reduces the host's GPU throughput (RAC) more than it increases the host's CPU throughput (RAC) (if the host has a high-end GPU). Of course there could be other reasons to crunch CPU and GPU tasks on the same host, but regarding performance it is not worth it. If GPU performance is the main reason for building a rig, it should have only one high-end GPU, and it does not need a high-end CPU and/or motherboard (with more PCIe x16 connectors); it's better to build as many PCs as your space / budget allows.
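A quick back-of-the-envelope check of that "roughly 8 times" figure, using the per-lane transfer rates and line encodings of PCIe 2.0 and 3.0 (a minimal sketch in Python; protocol overhead beyond the line encoding is ignored):

# Rough per-direction PCIe payload bandwidth (line encoding only, no protocol overhead):
def pcie_gb_per_s(gt_per_s, encoding, lanes):
    return gt_per_s * encoding * lanes / 8  # gigatransfers/s -> GB/s

gen2_x4  = pcie_gb_per_s(5.0, 8/10,    4)   # ~2.0 GB/s
gen3_x16 = pcie_gb_per_s(8.0, 128/130, 16)  # ~15.8 GB/s
print(gen2_x4, gen3_x16, gen3_x16 / gen2_x4)  # ratio ~7.9, i.e. roughly 8x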
4) Message boards : Graphics cards (GPUs) : PCI-e 16x PCI-e 4x (Message 47487)
Posted 8 days ago by Profile Retvari Zoltan
I run three dedicated machines. One of them has 5 GPUs. Only one of them is in a full length (16X) slot. One of them is in an 8X slot. The rest are in 1X slots with riser ribbons.

BOINC apps do not need to fully utilize all lanes.
Every BOINC app is different. The GPUGrid app is *very* different. Different workunit batches in GPUGrid can need *very* different PCIe bandwidth. Even WDDM alone can shave 10-15% off performance. That's why I used Windows XP x64 exclusively for crunching, until the Pascals came.

The card does all the work...
That's not true for the GPUGrid app: the double precision arithmetic is done on the CPU, and moreover there are batches which apply extra forces to the atoms, and this is also done on the CPU.

...and only needs 1X for communicating. The other 15 lanes are for processing video during videogame play which BOINC does not use.
There's no such dedicated lane in PCIe. If there are more lanes, every app (regardless of whether it's a game, CAD, folding@home, or some BOINC related app) will utilize them. That's one of the key features of PCIe. The performance gain can range from negligible up to scaling directly with the number of available lanes, but that depends on the GPU application, not on the PCIe architecture.

I run a GeForce GT 710, two 620s, a 520 and a 420. The 710 has the fastest chip (954 MHz) but only 1 GB VRAM/64-bit memory interface. Also has no processor clock, just a graphics clock.
Speed does not equal clock frequency when it comes to GPUs. A GTX 1080 Ti can be 30 times faster than a GT 710, while it has only about 1.5 times the clock frequency of a GT 710. The faster the GPU, the more easily the app running on it can hit the PCIe bandwidth limit.
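To illustrate that 30x figure, a minimal sketch using a very rough throughput proxy (CUDA cores times clock); the 1480 MHz base clock of the GTX 1080 Ti is my assumption, the 954 MHz of the GT 710 comes from the post above, and architectural differences are ignored:

# Very rough FP32 throughput proxy: CUDA cores x clock (MHz).
gt_710      = 192  * 954    # GT 710: 192 CUDA cores at 954 MHz (from the post above)
gtx_1080_ti = 3584 * 1480   # GTX 1080 Ti: 3584 CUDA cores at ~1480 MHz base (assumed)
print(1480 / 954)            # ~1.55x the clock frequency
print(gtx_1080_ti / gt_710)  # ~29x the raw throughput, i.e. roughly 30 times faster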

The 420 has 2 GB VRAM/128-bit memory interface. This card I use to run the display because it has the most memory/memory bandwidth/speed. It also has an 80mm fan. It also uses the most power. The other cards fall somewhere in between, having 1-2 GB VRAM but only 64-bit memory.

That being said, the only card that will accept and run GPUGrid tasks is the GT 710.
The GPUGrid app (and project) is a power-hungry kind.
5) Message boards : GPUGRID CAFE : Cancer and good bye (Message 47477)
Posted 10 days ago by Profile Retvari Zoltan
caffeineyellow5,

I know I'm probably too late for you to see this, but I just wanted to say that... God bless you and thank you for contributing to GPUGrid. Looking at this post has made me extremely motivated for GPUGrid, and I vow to keep up my part in dedicating myself to this, to help people like you.

Take it easy up there (insert heart here)

+1
There are no words.
6) Message boards : Graphics cards (GPUs) : PCI-e 16x PCI-e 4x (Message 47476)
Posted 10 days ago by Profile Retvari Zoltan
I have 2 GTX 1050 Ti 3Gb on a Gigabyte AB350MHD3 Motherboard with an AMD Ryzen 5 1400 CPU on Windows 10.
I think you refer to host 430643. This host now has only one GTX 1050 Ti, but I've checked your previous tasks, and it had two GPUs before: one GTX 1050 Ti and one GTX 1050 (without the "Ti"; see tasks 16353457 and 16355260). These are two different GPUs; the main difference is the number of CUDA cores (768 vs 640), which explains the different computing times you've experienced, but there's more.

I have noticed that when processing a WU (one core / WU), the core working for the GPU is ranging 70-80% usage.
GPU usage is misleading; if you want a better estimate you should look at the GPU power measurement.
Different GPU models have different power consumption (and TDP).
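If you want to watch power draw next to the usage figure, a minimal sketch like the following works on any host with the NVIDIA driver installed; it simply polls the driver's nvidia-smi tool, and the 5-second interval is an arbitrary choice:

# Poll GPU utilization and power draw so "90% usage" can be sanity-checked against real load.
import subprocess, time

QUERY = ["nvidia-smi",
         "--query-gpu=index,name,utilization.gpu,power.draw,power.limit",
         "--format=csv,noheader"]

while True:
    print(subprocess.run(QUERY, capture_output=True, text=True).stdout.strip())
    time.sleep(5)  # a card running near its power limit is genuinely busy; a high usage percentage alone proves little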

The first GPU card gets hot and GPU processor is at 90% most of the time.
If this PC has a tower case, then the hot air from the lower card heats the upper card. Cards which blow the hot air out directly through the rear grille are better in this regard (they don't blow the hot air into the case), but there's no such blower-style GTX 1050 / GTX 1050 Ti as far as I know.

This GPU is connected to the first PCI-e which works at PCI-e 3.0 16X.
With bus interface load averaging 30%.
The second card does not seem to get as hot as the first and the GPU processor is at 90% most of the time also.
This GPU is connected to the second PCI-e 16x Slot which works at PCI-e 2.0 4x.
With bus interface load averaging 30%.

Now, since both the first and the second card have their GPU processor usage around 90%, and since the bus interface load is averaging 30%, should both cards not heat up the same?
They would not heat up the same even if they were the same GPU.

For info, the second card takes a few hours longer than the first to complete a WU.
That could be caused by the narrower PCIe bandwidth, but in your case the lesser GPU simply takes longer.

Also, Since it is evident that there is no bottleneck on the bus interface , does this mean the cards perform the same for this type of WU even though the bandwidth between GPU / CPU is narrower?
Yes. There are two factors:
1. The present workunits do not need much interaction between the GPU and the CPU.
2. Your GPUs are not fast enough to make the PCIe bandwidth bottleneck noticeable.

Now, the big question: since both cards are having the same processor usage, and there is no bottleneck, should the WUs not finish at the same time also?
They should, but your past workunits show different GPUs, so you might have missed the "Ti" on one of them.

I might not be looking at the right information from GPU-Z.
GPU-Z should show the two different models (one with "Ti", and one without).
7) Message boards : Graphics cards (GPUs) : GPU Berechnung wird gestoppt (Message 47443)
Posted 14 days ago by Profile Retvari Zoltan
Hello everyone,

my graphics card keeps stopping its computation, although no limit is set in BOINC.

I have a GeForce GTX 970.

Is this a known problem?

Yes. This is a bug in the application.

If so, how can I fix it?

There is no direct setting to get rid of this.
Try reducing the clock frequency of your GPU.

Couldn't the workunits be halved?
A run time of about 4 hours would be much easier to handle.

That is not possible.

Google translate is my friend.
8) Message boards : Graphics cards (GPUs) : GTX 1050 Ti? (Message 47401)
Posted 16 days ago by Profile Retvari Zoltan
Since my new BOINC machine will be in a 2U server case, I'm limited to low-profile graphics boards. I considered 2U cases that have provisions for horizontally-mounted, full-sized boards which would require a riser card, but there were too many uncertainties with that approach.

This is what I'm looking at - an MSI GeForce GTX 1050Ti:

https://www.newegg.com/Product/Product.aspx?Item=N82E16814137081

I know this card isn't state of the art.
The GTX 1050Ti is a Pascal (GP107) GPU, so it's state of the art (until the release of Volta GPUs, but that will make the GTX 1080 and the GTX 1080Ti also "obsolete").

And, I know it has about half the power of the GTX 1080.
That would be the GTX 1060. The GTX 1050 Ti has less than a third of the computing power of a GTX 1080 (768 vs 2560 CUDA cores, and lower clocks).
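A rough sanity check of both statements with a cores-times-clock proxy; the boost clocks below are approximate reference values I'm assuming, and architectural factors are ignored:

# Rough relative computing power: CUDA cores x approximate reference boost clock (MHz).
cards = {
    "GTX 1080":    (2560, 1733),
    "GTX 1060":    (1280, 1708),
    "GTX 1050 Ti": ( 768, 1392),
}
base = cards["GTX 1080"][0] * cards["GTX 1080"][1]
for name, (cores, mhz) in cards.items():
    print(name, round(cores * mhz / base, 2))
# GTX 1060 comes out at ~0.49 (about half of a GTX 1080),
# GTX 1050 Ti at ~0.24 (well under a third).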

1) Do you see any issues with this board with respect to using it for GPUGrid?
This is a lower mid-range card, so it's slow.

2) Is it already considered "obsolete" and I just shouldn't bother?
It's not obsolete, but it's not intended for such demanding applications as GPUGrid. This specific card will emit all of its heat inside the case, so you have to make sure that the case has proper airflow. Blade servers tend to have cooling fans with very high RPM (up to 15,000(!)), which emit unbearable noise. A 2U server probably has bigger, lower-RPM (~5,000) fans, but it's still not recommended to live near such equipment.

3) Do you know of any low-profile graphics boards that would have better performance?
Passively cooled computing cards exist (designed for rackmount cases, with low clocks and low voltages to achieve low heat dissipation), but they are much too expensive for BOINC use.
9) Message boards : Number crunching : Gridcoin (Message 47399)
Posted 17 days ago by Profile Retvari Zoltan
So with Gridcoin up 35% in a day, pretty much 100-folding us early investors' money, does anyone have any predictions of where this will end up?
As far as I understand, Gridcoin differs from Bitcoin fundamentally: Gridcoin is a fiat money (as much of it can be created as we want), while only a finite number of Bitcoins can be made (= mined). So compared to any fiat money (crypto currencies, or real-world currencies), Bitcoin's exchange rate would rise infinitely (in theory), especially after all of it has been mined. This is not the case for Gridcoin, as it is a fiat money. Its exchange rate is made up entirely by the behavior of the parties on the (crypto) coin exchange, so you can't predict it, as there are many, many people (and currencies) involved. You should not trust any predictions, even those I imply at the end of this post.

I wonder if it will ever come to light what caused this dramatic spike in GRC's exchange rate (whether there was some kind of external event, or it came purely from the behavior of the coin exchange). If it's based only on the coin exchange, then those who have many thousands of GRC to sell for USD should do it now, but do it in many small (~1000 GRC) packets at a rising exchange rate (rising by 0.001 USD/GRC) to motivate GRC buyers, as they would feel they have to buy as soon as they can, or else it will go even higher. (If you put all your GRC in a single packet, that would make buyers feel safe and delay, while those who want to sell their GRC fast will put theirs in at a lower exchange rate.) But don't blame me if you sell all of your GRC and GRC doubles its exchange rate next week, as that could be caused by your own actions. It's also entirely possible that the exchange rate will return to its previous level.
10) Message boards : Number crunching : Gridcoin (Message 47380)
Posted 22 days ago by Profile Retvari Zoltan
Not yet, but I've successfully registered on https://c-cex.com.
This is my primary problem; I have been waiting for their confirmation e-mail since last Friday.
I have pressed their customer support several times, without any success / answers.
Have you checked your spam folder?

Holy crap the price just keeps going up, we're almost at 6 cents per gridcoin
That’s the reason I would like to sell now.

Surely the high exchange rate is the first thing that catches your eye, but you should also look at the trade volume at the bottom of the same graph. If the volume is low, you cannot sell your coins instantly at the high exchange rate, as there is no demand for GRC at that rate. If you want to sell your coins instantly, you have to set the exchange rate of your sell orders to meet the entries in the buy orders tab, but in that case you accept the lower exchange rate set by the buyers, and this pushes the exchange rate down.
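To make that concrete, a small illustrative sketch of filling an instant sell against a thin buy side; the order book numbers are entirely made up:

# Walking a hypothetical buy-order book with an instant ("market") sell.
buy_orders = [        # (price in USD/GRC, quantity in GRC), best bid first -- made-up numbers
    (0.060,   500),
    (0.055,  1500),
    (0.045,  4000),
    (0.030, 10000),
]

def market_sell(qty, book):
    """Fill qty GRC against the buy side; return proceeds and the last price level hit."""
    proceeds, last_price = 0.0, None
    for price, available in book:
        take = min(qty, available)
        proceeds += take * price
        last_price = price
        qty -= take
        if qty == 0:
            break
    return proceeds, last_price

proceeds, last = market_sell(10000, buy_orders)
print(proceeds / 10000, last)  # average ~0.041 USD/GRC, book walked down to 0.030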

