Questions and Answers : Windows : Unexpected difference in reward points
GennadyK (deprecated) | Joined: 9 Oct 18 | Posts: 3 | Credit: 91,792 | RAC: 0
Hello all, I am using two computers for Rosetta@Home computations:

1. Intel Core i5-6500 (Skylake) with 4 threads; each thread processes 1 WU in 8 hours, earning around 500 points per WU.
2. Intel Core i7-2720QM with 8 threads; each thread processes 1 WU in 8 hours, earning around 150 points per WU.

The second computer supports Hyper-Threading, so it exposes 8 threads while still having only 4 cores. I could understand it if the processing time were 16 hours; however, it remains 8 hours, while the reward points are more than 2 times lower. This does not happen when computing for World Community Grid: computer #1 needs 3 hours per WU and gets 150 points, while computer #2 needs 8 hours per WU and also gets 150 points. Could anyone explain why computer #2 is so much less efficient for Rosetta@Home? Does Rosetta simply give any CPU 8 hours, then cut it off, and what's done is done? Cheers
Mod.Sense (Volunteer moderator) | Joined: 22 Aug 06 | Posts: 4018 | Credit: 0 | RAC: 0
Your 2720QM has 4 cores with 8 threads and, it seems, less than 4 GB of memory. By hyperthreading, you are essentially doubling the demand on the already constrained memory resource on that machine. The way R@h handles runtime is to align to your R@h setting for target runtime. Credits are awarded by the number of models completed, so the number of models a given task will attempt is reduced (or increased) to match your target runtime. You should therefore assess credit per CPU-second, rather than per work unit or per hour of "wall-clock" time. Rosetta Moderator: Mod.Sense
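To make "credit per CPU-second" concrete, here is a minimal sketch using only the runtimes and credits quoted in the original post; the script is illustrative, not a project tool:

```python
# Compare the two hosts by credit per CPU-second, using the figures quoted
# in the original post. Each task occupies one thread, so CPU time per task
# is roughly its 8-hour wall-clock runtime.

machines = {
    "i5-6500 (4 threads)":   {"threads": 4, "credit_per_wu": 500, "hours_per_wu": 8},
    "i7-2720QM (8 threads)": {"threads": 8, "credit_per_wu": 150, "hours_per_wu": 8},
}

for name, m in machines.items():
    cpu_seconds = m["hours_per_wu"] * 3600
    per_task = m["credit_per_wu"] / cpu_seconds   # credit rate of one task
    per_host = per_task * m["threads"] * 3600     # all threads combined, per hour
    print(f"{name}: {per_task:.4f} credit/CPU-s per task, "
          f"{per_host:.0f} credit/hour for the whole host")
```

On these numbers the whole i7 host earns about 150 credits per hour against 250 for the i5, so the gap persists even after counting the extra threads, which is what points at the memory constraint.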
GennadyK | Joined: 14 Oct 06 | Posts: 4 | Credit: 3,059,534 | RAC: 0
Thanks for the explanations. For instance:

- A completed task with the typical reward (150 points) for the i7-2720QM: https://boinc.bakerlab.org/result.php?resultid=1037983853 ("This process generated 45 decoys from 45 attempts")
- A typical one for the i5-6500 (450 points): https://boinc.bakerlab.org/result.php?resultid=1038032459 ("This process generated 128 decoys from 128 attempts")
- One from my weakest Android device, typically 25 points: https://boinc.bakerlab.org/result.php?resultid=1037488690 ("This process generated 9 decoys from 9 attempts")

Are these "decoys" indeed the attempted models you were talking about? Can the i7-2720QM run more efficiently (with more points per task) if limited to 4 cores, or does it not matter, since Rosetta will adjust the work per task anyway and the efficiency always remains similar?
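A quick arithmetic check on the three linked results (all numbers copied from the task pages cited above): if credit tracks completed models, credit divided by decoy count should come out roughly constant across very different hosts.

```python
# (credits, decoys) pairs from the three task pages linked above.
results = {
    "i7-2720QM": (150, 45),
    "i5-6500":   (450, 128),
    "Android":   (25, 9),
}

for host, (credits, decoys) in results.items():
    print(f"{host}: {credits / decoys:.2f} credits per decoy")
```

The ratios land in a narrow band (roughly 2.8 to 3.5 credits per decoy), consistent with credit following completed models rather than wall-clock time.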
Mod.Sense (Volunteer moderator) | Joined: 22 Aug 06 | Posts: 4018 | Credit: 0 | RAC: 0
Yes, those "decoys" (or models) are what I was talking about. R@h complexity does not change; only the length of time the task runs does (trying to align with your runtime preference). The longer it runs, the more decoys it will complete, and the more credit it will be granted. For any machine that is memory constrained, it is difficult to predict how it might perform with more memory or fewer active tasks. I would suggest you look at memory faulting rates. If faulting rates are presently very high, then you might actually get more net work completed by running fewer active tasks. In general, hyperthreading does not yield much performance difference for an entirely CPU-bound workload such as R@h. Rosetta Moderator: Mod.Sense
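For the memory-faulting check, here is a rough sketch (illustrative only, not a project tool) that samples page-fault rates of the running Rosetta processes on Windows. It assumes the third-party psutil package, and the "rosetta" name filter is a guess at how the science application shows up in the process list; adjust it to match what Task Manager reports.

```python
import time

import psutil  # third-party: pip install psutil

def rosetta_fault_rate(interval_s: float = 10.0) -> None:
    # Find processes whose name contains "rosetta" (assumed name).
    procs = [p for p in psutil.process_iter(["name"])
             if "rosetta" in (p.info["name"] or "").lower()]
    # num_page_faults is the cumulative fault count that psutil's
    # memory_info() reports on Windows.
    before = {p.pid: p.memory_info().num_page_faults for p in procs}
    time.sleep(interval_s)
    for p in procs:
        try:
            delta = p.memory_info().num_page_faults - before[p.pid]
            print(f"PID {p.pid}: {delta / interval_s:.0f} faults/sec")
        except psutil.NoSuchProcess:
            pass  # the task finished during the sampling window

if __name__ == "__main__":
    rosetta_fault_rate()
```

Note that this counter includes soft faults, so look for sustained high rates together with heavy pagefile activity before concluding that fewer concurrent tasks would complete more net work.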