roundup
Joined: 11 May 10 Posts: 57 Credit: 1,735,920,193 RAC: 12,774,635
I have received 4 'Quantum chemistry calculations on GPU' WUs; 3 of them completed successfully on 2 Linux machines with a 4080 and a 4070 Ti.
Example here:
https://www.gpugrid.net/result.php?resultid=33727490
150 credits per successful WU? Seems odd.
Keith Myers
Joined: 13 Dec 17 Posts: 1289 Credit: 5,233,081,959 RAC: 10,574,390
I've had 3 failures and 3 successes. These seem to be test tasks. The interesting bit is that they appear to use Nvidia Tensor core calculation paths.
Very little is written into the result.txt file, and they use very little of the card's resources.
Let's hope these precursor test tasks are an indication of more substantive QC tasks to come, similar to what we see with the ATMbeta app.
Steve (Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist)
Joined: 21 Dec 23 Posts: 32
Hello,
Steve here from the computational science lab.
This is indeed a new test app (just for Linux at the moment). You may be getting some test jobs for this app if you have selected the run test applications option.
There will be an in-depth post about the new app soon, and then, once we have it running properly, some substantial work!
[quote]
Hello,
Steve here from the computational science lab.
This is indeed a new test app (just for Linux at the moment). You may be getting some test jobs for this app if you have selected the run test applications option.
There will be an in-depth post about the new app soon, and then, once we have it running properly, some substantial work!
[/quote]
Thanks Steve.
Is it intended that these tasks do not use the GPU right now? Most are reporting that they run a process on the GPU, but with no GPU utilization and no significant power draw over idle.
Will they use Tensor cores on RTX cards?
If so, are they necessary? What about older GTX cards?
____________
Keith Myers
Joined: 13 Dec 17 Posts: 1289 Credit: 5,233,081,959 RAC: 10,574,390
I did catch the card using VRAM and power resources once I ignored which GPU nvidia-smi said the task was on. nvidia-smi gets confused by these QC test tasks, but reports properly for the ATMbeta Python tasks.
About 7.6 GB of VRAM usage on the GPU, and brief bursts of full power draw. Up to 76 GB of virtual memory used by the Python process on the CPU.
Thanks for the QC task news, Steve. Looking forward to the real work to come.
Steve
Joined: 21 Dec 23 Posts: 32
Thanks for the feedback, and thanks all for running the tests!
Some of the older test runs will have been using the CPU and no GPU. This was for debugging purposes.
All of the most recent tests will have been using the GPU. There should be high utilisation, high memory use, and high power draw. Running the test locally on an RTX 3090, the test work unit takes 6 minutes at 100% GPU, 10 GB max memory, and maximum power draw. Here are the charts:
https://i.imgur.com/5ceCoW1.png
This test workunit represents calculating the forces and energy of a single small molecule with quantum chemistry methods. In the future, a work unit will comprise multiple of these calculations.
This application will only work on GPUs with compute capability 6.0 or newer, so it works fine on 10XX cards. I can see a few failures on older cards.
The code uses the NVIDIA linear algebra libraries, so on newer cards you may see more utilisation (Nvidia spend more time optimising these libraries for the most recent generation of cards).
The other error I am now seeing, and you may see too, is a CUDA out-of-memory error. This does not seem to be due to the maximum memory of the GPU: I can run the test successfully on a GTX 1080 with 8 GB, and I am seeing successful results from hosts with GPUs with less memory. I believe this error occurs when the workunit is sharing the GPU with another process.
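If you want to check whether your card meets the compute capability 6.0 cutoff Steve mentions, here is a quick sketch. It assumes PyTorch is installed (recent drivers also support `nvidia-smi --query-gpu=name,compute_cap --format=csv`) and degrades gracefully when no GPU is visible:

```python
# Sketch: report the compute capability of each visible CUDA GPU and
# whether it meets the app's 6.0 minimum. Assumes PyTorch is available;
# falls back to a hint when it is not, or when no GPU is visible.
def check_compute_capability(minimum=(6, 0)):
    try:
        import torch
    except ImportError:
        print("PyTorch not installed; try: "
              "nvidia-smi --query-gpu=name,compute_cap --format=csv")
        return []
    if not torch.cuda.is_available():
        print("No CUDA GPU visible")
        return []
    results = []
    for i in range(torch.cuda.device_count()):
        cap = torch.cuda.get_device_capability(i)  # e.g. (6, 1) for a GTX 1080
        ok = cap >= minimum
        print(f"GPU {i}: {torch.cuda.get_device_name(i)} "
              f"sm_{cap[0]}{cap[1]} {'eligible' if ok else 'too old'}")
        results.append((cap, ok))
    return results

check_compute_capability()
```

A GTX 1080 reports (6, 1), which is why the 10XX series passes, while Maxwell (5.x) and older cards fail.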
(You can't use https for images at this project - the web software is too old.)
Saw a big batch of work units yesterday for this project, and all of them were successful on our end. For a relatively early beta, that's fantastic. What are others seeing with that big batch?
No issues running; ran at 2x on our systems without problems. Are there no checkpoints for these work units? I noticed that when I had to exit BOINC a few times to make some changes to the app config.
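Running tasks at 2x (or more) per GPU, as mentioned above, is done with a BOINC app_config.xml in the project's directory. A minimal sketch - the app name below is a placeholder, not confirmed; check the `<name>` elements in your client_state.xml for the actual value:

```xml
<app_config>
  <app>
    <!-- App name is hypothetical; match it to client_state.xml -->
    <name>QuantumChemistryGPU</name>
    <gpu_versions>
      <!-- 0.5 GPU per task => two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After editing, use "Options > Read config files" in the BOINC Manager (or restart the client) for the change to take effect.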
I think there are no checkpoints right now, but the tasks at least restart from the beginning without an error.
VRAM utilization was greatly reduced from the previous batch (which I like, since it lets you more easily run more than one at a time). Runtime was roughly twice as long as the previous batch, as the admin indicated the tasks were twice as large (100 molecules instead of 50).
The application makes use of FP64 hardware, and Nvidia cards with high FP64 performance show a great benefit: Titan V, V100, and P100 will perform best here, while most other GeForce/Quadro cards have much lower FP64 throughput. My Titan V runtimes look to be about 3x faster than your 4090, for example. Power draw fluctuated between 80-150 W per card, probably about 120 W average.
I processed about 1,500 of yesterday's tasks on my 16 Titan Vs. It would have been more, but I had a lot of trouble getting enough work due to the task download limits, the rate I was completing them per host, and the limits on how often a single IP is allowed to make requests at this project.
____________
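The FP64 gap described above is easy to quantify from published spec sheets. A rough sketch (the peak figures are approximate vendor numbers, and real-world speedups - like the ~3x observed here - are usually smaller than the theoretical ratio):

```python
# Approximate peak FP64 throughput in TFLOPS, from vendor spec sheets.
# Ballpark figures, not measurements.
PEAK_FP64_TFLOPS = {
    "Titan V":  7.45,   # GV100: FP64 at 1:2 of FP32
    "V100":     7.8,
    "P100":     4.7,
    "RTX 4090": 1.29,   # AD102: FP64 at 1:64 of FP32
    "RTX 3090": 0.556,  # GA102: FP64 at 1:64 of FP32
}

def theoretical_speedup(card_a, card_b):
    """Ratio of peak FP64 throughput of card_a over card_b."""
    return PEAK_FP64_TFLOPS[card_a] / PEAK_FP64_TFLOPS[card_b]

print(f"Titan V vs RTX 4090: {theoretical_speedup('Titan V', 'RTX 4090'):.1f}x")
```

The theoretical ratio comes out higher than the observed ~3x, which is expected: parts of each task (I/O, CPU-side Python work, non-FP64 kernels) don't scale with double-precision throughput.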
Makes sense. Were you running them 1x or 2x?
1,500 tasks is incredible - with no invalids?
I was running them mostly at 3x. On one system I actually ran them at 4x to see whether the VRAM was sufficient. It was fastest overall at 4x, with ~2,000 s runtimes, so the fastest tasks were completing in about 8.5 minutes effective.
No invalids.
____________
Steve
Joined: 21 Dec 23 Posts: 32
Wow, that is impressive!
This app is one of the cases where the professional-grade Nvidia cards, with their double-precision performance, really shine.
Drago
Joined: 3 May 20 Posts: 12 Credit: 343,606,560 RAC: 530,097
I would really appreciate it if task requirements and properties - such as VRAM requirements, no checkpointing, Linux-only or Windows support, etc. - were highlighted in the preferences section, right where you mark the sub-projects you would like to support. That would save us volunteers a lot of time, instead of eventually finding out that your GPU can't handle them, or digging through pages and pages of forum entries.
Keith Myers
Joined: 13 Dec 17 Posts: 1289
You should PM Gianni or Toni and point them at your request. Steve, the developer of the science app discussed here, has nothing to do with the project web pages.
But if you brought this request nicely to Gianni, he might decide to add these project and sub-project requirements to the sub-project selection page in Project Preferences.
Drago
Joined: 3 May 20 Posts: 12
OK Keith. Thanks for the info. I will ask nicely. :-)
Erich56
Joined: 1 Jan 15 Posts: 1091 Credit: 6,902,032,676 RAC: 19,565,366
For quite a while now, only QC tasks have been available, with sometimes more than 100,000 unsent tasks shown on the project status page.
All other sub-projects have obviously been stopped, which excludes all Windows crunchers from participating in GPUGRID :-(
Is this the way GPUGRID will go now?