
Message boards : News : Windows apps restored

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Message 50093 - Posted: 27 Jul 2018 | 15:59:09 UTC
Last modified: 28 Jul 2018 | 8:35:02 UTC

We have restored the Windows applications.

If you get errors such as "197 (0xc5) EXIT_TIME_LIMIT_EXCEEDED", please try these:

- Rerun benchmarks (struck out - as noted below, benchmarks won't help)
- Reset the project
- Wait the problem out

If nothing works and you want a fix right away, there is a more complex workaround: editing client_state.xml.

The problem is somewhat explained here.
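
To give a bit more background on the 197 error (a rough sketch of how I understand the BOINC client, not its exact logic): the client kills a task with EXIT_TIME_LIMIT_EXCEEDED roughly when its elapsed time exceeds rsc_fpops_bound divided by the speed (flops) assumed for the app version, so raising the bound raises that ceiling. All numbers below are made up for illustration:

# Simplified sketch of the BOINC time-limit check (illustrative values only).
rsc_fpops_bound = 5.0e15      # from the <workunit> entry in client_state.xml
assumed_flops = 1.0e12        # speed the client assumes for the app version
limit_seconds = rsc_fpops_bound / assumed_flops
print("task would be aborted after ~%.1f hours" % (limit_seconds / 3600))
# If the assumed flops are too optimistic, or the bound too small, a healthy
# task runs out of allowed time and errors out with 197 (0xc5).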

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,598,561,851
RAC: 8,771,964
Message 50096 - Posted: 27 Jul 2018 | 17:22:35 UTC - in response to Message 50093.

Please also see my note in the old thread. Benchmarks won't help (they're CPU-only), but increasing <rsc_fpops_bound> will - either globally on the server, or individually in each user's copy of client_state.xml.
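
For anyone attempting the client_state.xml route, here is a rough sketch of what the edit can look like when scripted - just one way to do it, not an official tool. Exit BOINC completely first, keep a backup of the file, and treat the factor of 10 as purely illustrative; hand-editing the <rsc_fpops_bound> values in a text editor works just as well.

# Sketch: raise <rsc_fpops_bound> for every workunit in client_state.xml.
# Assumes the file parses as plain XML; filtering to a single project is
# fiddlier because the file groups entries positionally, so this touches all.
import xml.etree.ElementTree as ET

STATE_FILE = "client_state.xml"   # path depends on your BOINC data directory
tree = ET.parse(STATE_FILE)
for bound in tree.getroot().iter("rsc_fpops_bound"):
    bound.text = str(float(bound.text) * 10)   # factor is illustrative
tree.write(STATE_FILE)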

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,598,561,851
RAC: 8,771,964
Message 50097 - Posted: 27 Jul 2018 | 17:29:41 UTC - in response to Message 50093.

Toni wrote: "The problem is somewhat explained here."

Related, but significantly different in detail.

Eric McIntosh's SixTrack apps at LHC@home are CPU-only, and the problem there is that the project didn't account for tasks which completed unexpectedly quickly - the simulated particles ran into the simulated tunnel wall. We don't have that problem here.

tullio
Joined: 8 May 18
Posts: 190
Credit: 104,426,808
RAC: 0
Message 50118 - Posted: 28 Jul 2018 | 6:18:59 UTC

Latest try
Stderr output

<core_client_version>7.12.1</core_client_version>
<![CDATA[
<message>
(unknown error) - exit code -80 (0xffffffb0)</message>
<stderr_txt>
# GPU [GeForce GTX 1050 Ti] Platform [Windows] Rev [3212] VERSION [80]
# SWAN Device 0 :
# Name : GeForce GTX 1050 Ti
# ECC : Disabled
# Global mem : 4096MB
# Capability : 6.1
# PCI ID : 0000:01:00.0
# Device clock : 1392MHz
# Memory clock : 3504MHz
# Memory width : 128bit
# Driver version : r397_05 : 39764
# GPU 0 : 76C
# GPU 0 : 77C
# GPU 0 : 78C
SWAN : FATAL : Cuda driver error 700 in file 'swanlibnv2.cpp' in line 1965.
# SWAN swan_assert 0

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,598,561,851
RAC: 8,771,964
Message 50120 - Posted: 28 Jul 2018 | 6:46:42 UTC - in response to Message 50118.

Sadly, this is an individual problem on your computer, and nothing to do with the system-wide problem we're discussing as 'News'.

Please ask for help in the Number Crunching area instead.

tullio
Joined: 8 May 18
Posts: 190
Credit: 104,426,808
RAC: 0
Message 50121 - Posted: 28 Jul 2018 | 7:47:32 UTC - in response to Message 50120.

SETI@home GPU tasks run perfectly on the same computer, with an AMD A10-6700 CPU and the same GTX 1050 Ti. The problem is not my computer but your application, which does run perfectly on my older and slower Linux boxes.
Tullio

Toni
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Dec 08
Posts: 1006
Credit: 5,068,599
RAC: 0
Message 50122 - Posted: 28 Jul 2018 | 7:51:29 UTC - in response to Message 50121.
Last modified: 28 Jul 2018 | 7:51:39 UTC

Thanks. About 12 hours ago I increased the rsc_fpops_bound for newly submitted WUs.

lohphat
Joined: 21 Jan 10
Posts: 44
Credit: 704,692,359
RAC: 1,198,087
Message 50159 - Posted: 30 Jul 2018 | 17:34:16 UTC

The WUs I'm getting have HUGE estimated completion times of 12d+ and are counting down at 1m50s of remaining time per second.

So something is overestimating the time pretty badly.

And yes, benchmarks were run during the WU suspension period so they're current.

Richard Haselgrove
Joined: 11 Jul 09
Posts: 1576
Credit: 5,598,561,851
RAC: 8,771,964
Message 50164 - Posted: 30 Jul 2018 | 18:01:41 UTC - in response to Message 50159.

I have a machine in that state too. It has

<app_name>acemdlong</app_name>
<version_num>922</version_num>
<flops>100183256589.822480</flops>

<app_name>acemdshort</app_name>
<version_num>922</version_num>
<flops>775469970340.223750</flops>

100 GFlops and 775 GFlops respectively - ouch. (The card is a 1050 Ti, which NVIDIA's marketing department rates at 2,138 GFlops - don't believe a word of that.)

The fly in the ointment is a client DCF (duration correction factor) of 29.28 - it must have come from running that short task, although I didn't take the statistics from that one. That's what is giving me the 'remaining time estimate' of 14 days currently.
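
For anyone wondering how the client gets to 14 days, the arithmetic is roughly this (my reading of how BOINC estimates a task that hasn't started yet; the fpops estimate below is illustrative, the other two figures are the ones quoted above):

# Rough sketch of the remaining-time estimate for a task that hasn't started:
#   estimate ~= rsc_fpops_est / assumed_flops * duration_correction_factor
rsc_fpops_est = 4.0e15        # workunit size in fpops (illustrative value)
assumed_flops = 100.18e9      # the ~100 GFlops figure above
dcf = 29.28                   # the client's duration correction factor
estimate_seconds = rsc_fpops_est / assumed_flops * dcf
print("~%.1f days" % (estimate_seconds / 86400))   # comes out around two weeks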

It's mildly annoying, because it forces the new task to run immediately (EDF mode) rather than allowing my other projects' tasks to finish naturally, but it's actually nothing to worry about - it will sort itself out naturally as the tasks flow through.

tullio
Joined: 8 May 18
Posts: 190
Credit: 104,426,808
RAC: 0
Message 50642 - Posted: 4 Oct 2018 | 1:58:39 UTC

After the Windows 10 1809 upgrade on my HP desktop with an AMD A10-6700 CPU and a GTX 1050 Ti board, a GPU task is running with nominal parameters: 75 C temperature, 66% fan, 1683 MHz core clock. I don't know if this is just a happy coincidence.
Tullio


