
Message boards : Number crunching : It's time for a CUDA 11.1 app to support Ampere

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1031
Credit: 35,627,807,483
RAC: 74,125,343
Message 57008 - Posted: 23 Jun 2021 | 13:59:33 UTC
Last modified: 23 Jun 2021 | 14:01:50 UTC

So with Pop Piasa's post here... http://www.gpugrid.net/forum_thread.php?id=5217&nowrap=true#56930

Here's a bit of promise for owners of Ampere GPUs: https://www.acellera.com/index.php/2021/04/06/release-of-acemd-3-4/

This new version brings support for the latest NVIDIA GPUs, including the Ampere architecture, as well as performance improvements. The simulation speed has been benchmarked against several systems at typical production conditions on different GPU devices (including GTX 1080, GTX 1080 Ti, RTX 2080 Ti and RTX 3090). For the DHFR benchmark, on RTX 3090, ACEMD achieves a speed of ~1.3 µs/day.


This appears to explain why the project has not upgraded so far: Acellera owns and develops the software, so the wait has been on them, not on the GPUGRID team. Hopefully the license here will not need to be upgraded ($$$).


AND with more and more Ampere cards showing up on the project (their owners seemingly unaware that they don't work here yet), causing many errors and re-sends...

AND with these new ADRIA tasks that run 12 hours on the fastest currently supported GPU (the RTX 2080 Ti)...

Don't you think it's time to finally support the next-generation GPUs? At this point I think that a lot of the 30-series cards floating around on BOINC aren't crunching here simply because they don't work here yet. I've seen many posts on other BOINC forums from people expressing interest in GPUGRID, if only the project would update its applications.
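
To make the build-level ask concrete: CUDA 11.1 is the first toolkit that can generate code for compute capability 8.6, which is what the RTX 30-series reports. The snippet below is only a rough sketch and not the project's actual build setup; it simply queries each device's compute capability, and the comment at the top shows the kind of nvcc architecture flags (illustrative values, not GPUGRID's real ones) that a CUDA 11.1 rebuild would add so Ampere cards stop failing on kernels compiled only for older architectures.

// check_arch.cu - minimal sketch, not part of the GPUGRID/ACEMD code base.
// Illustrative CUDA 11.1 build line covering Pascal, Turing and Ampere,
// plus forward-compatible PTX for future architectures:
//   nvcc -gencode arch=compute_60,code=sm_60 \
//        -gencode arch=compute_75,code=sm_75 \
//        -gencode arch=compute_86,code=sm_86 \
//        -gencode arch=compute_86,code=compute_86 \
//        check_arch.cu -o check_arch
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // RTX 30-series cards report compute capability 8.6; a binary built
        // without sm_8x code or forward-compatible PTX cannot run on them.
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}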
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 57010 - Posted: 23 Jun 2021 | 20:14:24 UTC - in response to Message 57008.

I fully agree. What's so hard about a recompile? It doesn't even have to be deeply optimized to get started. The only reason I don't see this as a huge issue is that, at the current inflated prices, it doesn't make sense to buy Ampere GPUs anyway.

MrS
____________
Scanning for our furry friends since Jan 2002

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 57011 - Posted: 23 Jun 2021 | 20:59:32 UTC - in response to Message 57010.

It looks like my RTX 2060 would be useful too, though I have never paid much attention to the Tensor cores.
https://towardsdatascience.com/rtx-2060-vs-gtx-1080ti-in-deep-learning-gpu-benchmarks-cheapest-rtx-vs-most-expensive-gtx-card-cd47cd9931d2

It would probably be more effective here than on Folding. I can use my GTX 1070s there.

Ian&Steve C.
Joined: 21 Feb 20
Posts: 1031
Credit: 35,627,807,483
RAC: 74,125,343
Message 57012 - Posted: 23 Jun 2021 | 21:03:52 UTC - in response to Message 57011.

Jim1348 wrote:
It looks like my RTX 2060 would be useful too, though I have never paid much attention to the Tensor cores.
https://towardsdatascience.com/rtx-2060-vs-gtx-1080ti-in-deep-learning-gpu-benchmarks-cheapest-rtx-vs-most-expensive-gtx-card-cd47cd9931d2

It would probably be more effective here than on Folding. I can use my GTX 1070s there.

Your RTX 2060 can be used here now; Turing is already supported. I'm talking about Ampere, the 30-series cards.
____________

Jim1348
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Message 57013 - Posted: 23 Jun 2021 | 21:30:24 UTC - in response to Message 57012.
Last modified: 23 Jun 2021 | 22:21:46 UTC

There is an advantage in not being on the bleeding edge. As I recall, this is not the first time a new generation has gone unsupported here for a while; being an early adopter just gets more expensive.

Also, while we are at it, I wonder how much work will be available? The AI technique has higher output, and may allow more of it to be done in-house.


