1) Message boards : News : New badges! (Message 44825)
Posted 28 minutes ago by Profile Retvari Zoltan
Where did you find this?

I did not find this; I made it.
2) Message boards : News : New badges! (Message 44822)
Posted 2 hours ago by Profile Retvari Zoltan
Noelia's study of ions binding to the myo-inositol monophosphatase enzyme:

1st RaymondFO
2nd Retvari Zoltan
3rd Stoneageman
4th HA-SOFT, s.r.o
5th Herb
6th Rick A. Sponholz
7th Roald
8th Beyond
9th Orange_1050
10th BruceR
11th ecafkid
12th Acey Pilot
13th 5pot
14th Bedrich Hajek
15th petebe
16th TJ
17th Nikke
18th neilp62
19th Rion Family
20th Venec
21st IFRS
22nd Grumpy
23rd Bikermatt
24th Eagle07
25th Jozef J
26th Ken Florian
27th jjch
28th JugNut
29th s0m3wh4t
30th John

Nathan's study of S1PR1 receptor:

1st Stoneageman
2nd Retvari Zoltan
3rd Erik Postnieks
4th HA-SOFT, s.r.o
5th Venec
6th For the Universe ( Apaszko-Kaszkiety )
7th IFRS
8th 5pot
9th Ken Florian
10th Herb
11th Paul Raney
12th werwerdus
13th GPUGRID Role account
14th Nikke
15th Bedrich Hajek
16th jlhal
17th comfortw
18th flashawk
19th eruda
20th Bikermatt
21st wdiz
22nd RaymondFO
23rd PERPLEXER ~ Thomas Huettinger
24th John
25th Roald
26th Localizer
27th Snow Crash
28th Helmholdt
29th Alain Maes
30th Rayzor
3) Message boards : News : WU: CASP (Message 44817)
Posted 23 hours ago by Profile Retvari Zoltan
I notice that with the CASP units I have around 80% GPU usage using Windows XP and a GTX 960. I see that others are having lower GPU usage as well. Will the CASP units always have such low GPU usage?

Most probably they will. I think it's because the models these units simulate have "only" 11,340 atoms, while other batches have 2 to 5 times as many. The smaller the model, the more frequently the CPU has to do the DP (double precision) part of the simulation, resulting in lower GPU usage. (However, there have been high atom count batches with low GPU usage in the past, so a larger model could also need relatively intense CPU-GPU interaction.)
4) Message boards : News : New badges! (Message 44813)
Posted 1 day ago by Profile Retvari Zoltan
Great news!
Could you make a list of the top contributors please (as for earlier badges)?
5) Message boards : News : Geforce 10 / Pascal app coming soon (Message 44784)
Posted 4 days ago by Profile Retvari Zoltan
I am getting some work units but they error out immediately:

It's because the GPUGrid app does not support Pascal GPUs yet.
6) Message boards : Server and website : SOS-Downloads stuck (Message 44774)
Posted 4 days ago by Profile Retvari Zoltan
It seems that everyone (including me) has this happening:

17 85 ms 83 ms 91 ms anella-val1-router.red.rediris.es []
18 * * * Request timed out.
19 83 ms 83 ms 86 ms grosso.upf.edu []

Is that the problem?

I assume you refer to #18: it's quite normal that some routers don't reply to requests coming from random computers on the internet.
I hoped to get some clues, but we're still just guessing at the problem.
To investigate this issue, packet-level network traffic analysis should be done by the network admins at the campus, who could then decide whether to take countermeasures locally or to contact other ISPs for a solution. But frankly, I think this issue doesn't have that much impact on the project's throughput. I don't know how many sites are hosted on this server (besides ps3grid.net and gpugrid.net). I presume there are a lot of servers hosting a lot of webpages at the campus, all routed through the same devices. Their traffic may interfere with GPUGrid's traffic, but that can't be analysed from the outside.
7) Message boards : Server and website : SOS-Downloads stuck (Message 44762)
Posted 5 days ago by Profile Retvari Zoltan
My trace route looks very similar after the first couple of hops:
Tracing route to www.gpugrid.net [] over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  []
  2    16 ms    16 ms    16 ms  lo1.bsr0-zugliget.net.telekom.hu []
  3    16 ms    16 ms    16 ms
  4    17 ms    16 ms    17 ms
  5    19 ms    16 ms    16 ms
  6    24 ms    23 ms    23 ms
  7    22 ms    22 ms    22 ms
  8    28 ms    28 ms    28 ms  be2974.ccr21.muc03.atlas.cogentco.com []
  9    33 ms    34 ms    34 ms  be3072.ccr21.zrh01.atlas.cogentco.com []
 10    46 ms    46 ms    45 ms  be3080.ccr21.mrs01.atlas.cogentco.com []
 11    58 ms    58 ms    57 ms  be2354.ccr21.vlc02.atlas.cogentco.com []
 12    62 ms    61 ms    62 ms  be2339.ccr22.mad05.atlas.cogentco.com []
 13    63 ms    62 ms    63 ms  be2853.rcr11.b015537-1.mad05.atlas.cogentco.com []
 14    63 ms    62 ms    63 ms
 15   159 ms    74 ms    74 ms  CIEMAT.AE1.cica.rt1.and.red.rediris.es []
 16    78 ms    77 ms    77 ms  CICA.AE1.uv.rt1.val.red.rediris.es []
 17    85 ms    83 ms    91 ms  anella-val1-router.red.rediris.es []
 18     *        *        *     Request timed out.
 19    83 ms    83 ms    86 ms  grosso.upf.edu []
 20    84 ms    83 ms    83 ms  grosso.upf.edu []
 21    83 ms    91 ms    84 ms  grosso.upf.edu []

Trace complete.

I suspect that one of my hosts had a stalled download, which made it crunch for Einstein@home for a while. These glitches usually happen to my hosts almost exclusively when new workunits become available after a near-empty period. That's when the ghost workunits appear, too. Probably too many hosts are connected (or trying to connect) to the server during these periods. Perhaps it looks like a DDoS attack to some firewall/router along the way.
8) Message boards : Server and website : SOS-Downloads stuck (Message 44756)
Posted 6 days ago by Profile Retvari Zoltan
While I don't think the GPUGrid staff could do anything about your HTTP timeout problem, out of curiosity I'd ask you to run a very basic network diagnostic:
If you have a Windows-based PC on the same network as your crunching box, please open a command prompt and type

ping www.gpugrid.net -n 100

You can do it on Linux too, although I'm not familiar with its command syntax (the -n 100 parameter tells the Windows ping command to try 100 times).
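If I remember correctly, the Linux equivalent uses -c for the count, so something like this should do the same thing (untested on my side):

ping -c 100 www.gpugrid.net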
You'll see a lot of (exactly 100, if everything's going well) messages like:

Reply from bytes=32 time=83ms TTL=49

Then, at the end:

Ping statistics for
    Packets: Sent = 100, Received = 100, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 83ms, Maximum = 88ms, Average = 83ms

These are the actual results from my host; I'm curious about your statistics.
I expect your packet loss and round trip times to be significantly higher than what I experience.
Unfortunately, these numbers won't reveal which device is responsible for your problem, but I'm quite confident that it's closer to your end (most probably at your ISP) than to the GPUGrid site (otherwise many more users would have such difficulties).

You could also try a traceroute command:

tracert www.gpugrid.net

This gives you a list of the devices between your end and grosso.upf.edu (on which the gpugrid.net project resides).
Perhaps this list could help us figure out what's wrong, especially if it gives you very different results when you run it multiple times.
In some cases these errors are simply caused by network congestion (when the ISP has limited bandwidth towards certain destinations), which can depend on the time of day. On your end, however, P2P file sharing applications or appliances, or a faulty router/switch, could cause such strange errors (but in that case I'm sure there would be problems with other sites as well).
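On Linux the corresponding command is usually traceroute (it may need to be installed separately on some distributions):

traceroute www.gpugrid.net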
9) Message boards : Graphics cards (GPUs) : 2 Projects For One GPU? (Message 44754)
Posted 6 days ago by Profile Retvari Zoltan
Okay, so I've tested Ubuntu and the usage is only 80%... I thought Ubuntu doesn't have WDDM?
You should set the SWAN_SYNC environment variable under Linux too, to get the maximum possible GPU usage, but you have to set it for the user which runs the BOINC client. I'm not into Linux at all, but I've seen a post about it a couple of months ago; a rough sketch follows below.
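As a rough sketch only (the value 1 and the location are assumptions on my part, and the user that actually runs the BOINC client depends on how it was installed, so check the Linux SWAN_SYNC thread for the details): if BOINC runs under your own user, you would add something like this to that user's shell profile and then restart the client:

export SWAN_SYNC=1

If BOINC runs as a system service, the variable has to be set in whatever environment file the service reads on your distribution.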
10) Message boards : News : Geforce 10 / Pascal app coming soon (Message 44753)
Posted 6 days ago by Profile Retvari Zoltan
... perhaps you could consider excluding Pascal GPUs from work allocation for the time being?

