Message boards : Number crunching : difficult target first
Author | Message |
---|---|
NewInCasp Send message Joined: 12 May 06 Posts: 21 Credit: 5,229 RAC: 0 |
Hi, is it possible to queue difficult targets first in R@H? Yesterday I was checking a few new targets in CASP7 and found some of them are quite easy! |
Moderator9 Volunteer moderator Send message Joined: 22 Jan 06 Posts: 1014 Credit: 0 RAC: 0 |
Hi, is it possible to queue difficult targets first in R@H? Yesterday I was checking a few new targets in CASP7 and found some of them are quite easy! The short answer is no. The system will work on them based on their deadlines. There really are no easy or hard ones. There are longer and shorter ones, but the difficulty for the computer is the same. The credit per hour is the same for all work units. That said, there is a small advantage in working on the larger ones, depending on how you measure your credit scores. If you work on a lot of large proteins, it will take longer to make a single model. So the system will report these back less often, but claim higher credit because they took more time to produce. This will raise your RAC score. It will not affect your total credit. Some people like this little extra boost in their RAC. I am advised that in the next version of the application, the servers will begin a new distribution policy. The very largest of the work units will only be sent to systems with more than 1 GB of installed memory. This is to help people with less memory. Moderator9 ROSETTA@home FAQ Moderator Contact |
rbpeake Send message Joined: 25 Sep 05 Posts: 168 Credit: 247,828 RAC: 0 |
I am advised that in the next version of the application, the servers will begin a new distribution policy. The very largest of the work units will only be sent to systems with more than 1 GB of installed memory. This is to help people with less memory. I have a machine with 958 MB of memory, and I do not use it for other demanding tasks, so it would be nice if I could somehow "opt in" for the large-memory jobs, even though it falls below the 1 GB minimum. Regards, Bob P. |
Moderator9 Volunteer moderator Send message Joined: 22 Jan 06 Posts: 1014 Credit: 0 RAC: 0 |
I am advised that in the next version of the application, the servers will begin a new distribution policy. The very largest of the work units will only be sent to systems with more than 1 GB of installed memory. This is to help people with less memory. I don't know how tight they will be on this. They really should not use the BOINC memory figures, because those are not correct. I also have some machines that are running without any problem, and they fall below the minimum. I want to convince them to make this a user-selectable parameter. There will be an announcement in the announcements thread for whatever they do. Moderator9 ROSETTA@home FAQ Moderator Contact |
senatoralex85 Send message Joined: 27 Sep 05 Posts: 66 Credit: 169,644 RAC: 0 |
I don't know how tight they will be on this. They really should not use the BOINC memory figures, because those are not correct. I also have some machines that are running without any problem, and they fall below the minimum. I want to convince them to make this a user-selectable parameter. There will be an announcement in the announcements thread for whatever they do. I would also like to see this be made available based on user preference! Although my machine does not have 1 GB of memory, it does have 800 MHz Rambus RDRAM that came standard with Gateway at the time. It is pretty robust! |
rbpeake Send message Joined: 25 Sep 05 Posts: 168 Credit: 247,828 RAC: 0 |
...I want to convince them to make this a user-selectable parameter. There will be an announcement in the announcements thread for whatever they do. Another factor to consider: I do not use the graphics because I run BOINC as a service, thus saving additional memory. So an arbitrary cut-off of 1 GB of memory does not seem to make sense, because there are other factors to consider. And from what I read on the Ralph boards, not everyone with less than 1 GB was having issues with the larger work units (and most of those with issues were also running the graphics). Regards, Bob P. |
BennyRop Send message Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
My single-core, 1 GB RAM, 2 GHz CPU hasn't had any problems with Ralph other than the constant scroll of out-of-work messages, and a string of 5 WUs that were missing (fasta?) files. You're welcome to use it for the large proteins; and it's also a service install, so there's no graphics memory usage. |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
...not everyone with less than 1 GB was having issues with the larger work units (and most of those with issues were also running the graphics). The point for any project like this is to make the experience as problem-free as possible for everyone involved. So they aren't looking for the level at which some systems will work; instead they must find the level at which ALL systems will work. There will still be work for <1 GB systems to crunch on.

They are trying to make the best use of the available resources, both via our PCs and via their developers and support volunteers. Who knows better what they are running now, and what they plan to run in the future, than the project team? And so who better to establish the guidelines? I agree, they have taken the choice away from you, and it would be better if they allowed you to choose, and defaulted things to the conservative decision for those that don't want to HAVE to choose. But it is significantly more difficult for them to allow you to opt in, add preferences and resolve them every time you connect, and test all of that process.

Bottom line is that they've found "too many" systems have problems with specific WUs, and so they are taking steps so that those systems will no longer have such a problem. One step at a time. Maybe the memory refinements will continue and they'll be able to bring down even those larger WUs, and it will become a moot point. But there are many out there that do not meet the present 512 MB the project recommends... and those are often the ones posting with problems. And sometimes swearing about how other, less memory-intensive projects run just fine and so R@H is broken... yadda yadda... "I'm not going to run R@H anymore."

So please try to ensure you stay in line with the goals of the greater project, and not just one specific PC environment where you think you can get by with less than the project's recommendations. They aren't saying you are mistaken. They're just saying that too many people are likely to run into problems, and so they're taking steps to avoid that. Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/ |
tralala Send message Joined: 8 Apr 06 Posts: 376 Credit: 581,806 RAC: 0 |
The problem with the opt-in solution is that it is currently not supported by BOINC, whereas automatic distribution based on the memory specification of the target host is. I don't know whether they have the time to change the code. P.S.: I would suggest >=1024 MB or >1023 MB, since 1 GB should be enough even for the larger ones, and machines with 1 GB are common whereas more than 1 GB is still the exception. But it depends on the test results on Ralph, I guess. |
Ethan Volunteer moderator Send message Joined: 22 Aug 05 Posts: 286 Credit: 9,304,700 RAC: 0 |
The problem with the opt-in solution is that it is currently not supported by BOINC, whereas automatic distribution based on the memory specification of the target host is. I don't know whether they have the time to change the code. I'd suggest something like 950 MB. Many people have integrated video, which takes away from the total amount of memory visible to the OS. |
rbpeake Send message Joined: 25 Sep 05 Posts: 168 Credit: 247,828 RAC: 0 |
I'd suggest something like 950 MB. Many people have integrated video, which takes away from the total amount of memory visible to the OS. That is exactly my situation! Thanks for thinking of it. Regards, Bob P. |
Moderator9 Volunteer moderator Send message Joined: 22 Jan 06 Posts: 1014 Credit: 0 RAC: 0 |
I'd suggest something like 950 MB. Many people have integrated video, which takes away from the total amount of memory visible to the OS. There is new information on this issue. I already posted somewhere on this, but I can't find it now, so I will repeat it here. The project has decided that there are a number of issues with the very largest of the work units that go beyond simple memory size. For one thing, there is a memory "leak" occurring when these are run, which they have not solved yet. There are also major differences in the science being performed in those work units. So they have determined that these should be run in a completely different environment. In part this is because of the issues involved in separating the normal work unit science from the large work unit science. They are really quite different. As a result, they will continue to reduce the memory footprint of the work units that are to be run on Rosetta, and the extremely large ones will be run in a different system. This eliminates the need for the change in work unit distribution control. Please don't shoot the messenger. Moderator9 ROSETTA@home FAQ Moderator Contact |
tralala Send message Joined: 8 Apr 06 Posts: 376 Credit: 581,806 RAC: 0 |
Please don't shoot the messenger. ...which is tempting, though. ;-) Well, the message is clear: there are some people out there who would really like to help with their big machines on the more demanding WUs. The project team can consider asking for that any time, be it with big WUs here on Rosetta, be it on Ralph, or even a third BOINC offspring. If they have another environment which can produce the science, that's okay as well. Perhaps the BOINC scheduler will get more sophisticated over time to allow better targeting of high-spec machines with demanding WUs. |
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,617,765 RAC: 11,361 |
Although it may be irrelevant for this topic now, one thing I'd like to suggest is that if large jobs are being sent out, then it might be a good idea to only send one big job per dual-core/dual-CPU computer, if that's possible. One big job and one normal job would be a better proposition than two big jobs for most systems, I'd have thought. |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
...only send one big job per dual-core/dual-CPU computer, if that's possible. THAT one's going to be tough. BOINC's rules aren't that complicated, I don't think. But if they can establish the needed memory "per CPU", then hopefully that achieves the same objective. At this point, as Mod9 said, the idea of heavy WUs is on hold. I think their original post on the subject was over on Ralph, which is why they couldn't find it. Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/ |
Lee Carre Send message Joined: 6 Oct 05 Posts: 96 Credit: 79,331 RAC: 0 |
...only send one big job per dual-core/dual-CPU computer, if that's possible. From my understanding of the system, you're correct in that it will be hard, if not impossible, to make such a specification. I don't even think they can specify RAM per CPU, only RAM per WU. This is yet another area in which BOINC is lacking. :( Want to search the BOINC Wiki, BOINCstats, or various BOINC forums from within Firefox? Try the BOINC related Firefox Search Plugins |
©2024 University of Washington
https://www.bakerlab.org