Message boards : Number crunching : Newly built 5.4.10 optimized windows boinc client
Hoelder1in Send message Joined: 30 Sep 05 Posts: 169 Credit: 3,915,947 RAC: 0 |
Perhaps by taking the average of all the Ralph times and basing it on X credits per hour? As an example, if one simulation takes an average of 1.5 hours and the standard reward is 10 credits per hour, each simulation on Rosetta would grant 15 credits. Regardless of how fast your computer is (or the version of BOINC you're running), whenever you return one simulation of that WU type, you get 15 credits. It is not really necessary to talk about completion times and credits per hour on Ralph - we could just leave the current (BOINC benchmark based) credit system on Ralph as it is, and then, for each WU type, assign the median of the claimed credits per structure on Ralph to the returned Rosetta structures. That way the new system would not have to be calibrated; the total amount of granted credit would by definition be the same as with the current system, so there would be no issues with cross-project or FLOPS/credit calibration. Team betterhumans.com - discuss and celebrate the future - hoelder1in.org |
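[Editor's sketch] The scheme described in the post above can be sketched in a few lines of Python. The data layout and all numbers here are hypothetical, purely to illustrate the per-WU-type median idea, not Rosetta's actual data or code:

```python
from collections import defaultdict
from statistics import median

# Hypothetical Ralph returns: (wu_type, claimed_credit, structures_done)
ralph_results = [
    ("PROT_A", 30.0, 3),  # stock client:     10 credits/structure
    ("PROT_A", 33.0, 3),  # stock client:     11 credits/structure
    ("PROT_A", 45.0, 3),  # optimized client: 15 credits/structure
]

def median_rate_per_type(results):
    """For each WU type, the median claimed credit per structure."""
    by_type = defaultdict(list)
    for wu_type, credit, n in results:
        by_type[wu_type].append(credit / n)
    return {t: median(rates) for t, rates in by_type.items()}

# A Rosetta host returning k structures of a given type is granted
# k * rate, regardless of its own benchmarks or client version.
rates = median_rate_per_type(ralph_results)
print(rates["PROT_A"] * 5)  # 5 structures at the median rate -> 55.0
```

Because the rate is fixed per WU type, every host returning the same structure gets the same credit, which is the calibration-free property the post points out.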
Keith Akins Send message Joined: 22 Oct 05 Posts: 176 Credit: 71,779 RAC: 0 |
Are there enough RALPH machines to get enough sampling for reasonable averages? |
Ethan Volunteer moderator Send message Joined: 22 Aug 05 Posts: 286 Credit: 9,304,700 RAC: 0 |
Hoelder1in, that's a good suggestion - definitely easier to program. The first thing I think of, though, is: doesn't that just move the 'optimized' client into Ralph? While it wouldn't have as direct an impact as it does on Rosetta, the median would still be shifted by the various clients. Perhaps time is a better measure than claimed credit? Credit calculation is time * CPU speed (for the most part). If all Ralph measured was time, it would take all the other variables out - overclocking, clients, AMD vs. Intel - all of it would be averaged out to get a simulation-to-credit constant. Thoughts? |
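[Editor's sketch] The contrast Ethan draws can be made concrete with a toy version of the benchmark-based claim. This is a hypothetical simplification, not BOINC's exact formula, and the scale factor is arbitrary:

```python
def claimed_credit(cpu_seconds, benchmark_score):
    """Toy benchmark-based claim: time * CPU speed, as described above.
    (Hypothetical simplification of the BOINC formula; the 0.001
    scale factor is arbitrary.)"""
    return cpu_seconds * benchmark_score * 0.001

SAME_WORK = 5400  # identical simulation, identical real crunch time

honest = claimed_credit(SAME_WORK, 2.0)    # stock client benchmark
inflated = claimed_credit(SAME_WORK, 6.0)  # 'optimized' benchmark, 3x higher

print(round(inflated / honest, 1))  # claim tripled for the same work
# A time-only measure would report 5400 s in both cases: nothing
# for the client version to shift.
```

The benchmark term is the only thing an optimized client changes, so dropping it and keeping only time removes the lever entirely, which is Ethan's point.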
Keith Akins Send message Joined: 22 Oct 05 Posts: 176 Credit: 71,779 RAC: 0 |
Even if the optimised client were shifted to RALPH, the credit granting would still be equalized. Although we have no way of knowing how many optimised clients/systems will go to RALPH after CASP 7 ends or how many are there now. |
Whl. Send message Joined: 29 Dec 05 Posts: 203 Credit: 275,802 RAC: 0 |
"Take my lead. I dont use optimized clients. Just say no like me! Woof!" |
BennyRop Send message Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
When I first showed up here, we were doing 1? decoy/model/simulation per WU and returning them to the server. The server got hammered when everyone was running small proteins, and our error logs filled with connection problems: the fast systems were done in 5-10 minutes, and more machines were trying to download than the server could supply. When the WUs were set to run 10 decoys/models/simulations, the slower machines that took up to 6 to 8 hours to produce a single decoy had trouble finishing their tasks - and if they weren't on 24/7, some never finished the WU at all.

To reduce bandwidth and deal with the problems of a fixed number of decoys per WU, we were given the option of run times from 1 hour to 4 days (96 hours). A few problems cropped up with the longer run times, but they have since been taken care of. I went from 100 megs a day down to about 6 megs a day, and recent improvements have shrunk the size of the WUs even further. (We're still waiting for compression, I believe.) Any proposed change to the system would have to address the same issues.

--------

With an internal Rosetta benchmark used on Ralph, we'll have more consistent results in the credit-per-model score. If there are more standard-client Intel CPUs on Ralph than standard-client AMD CPUs, the average will be lower; if a particular WU has more than the average number of optimized clients taking part, the credit-per-model score can go way up. |
Ethan Volunteer moderator Send message Joined: 22 Aug 05 Posts: 286 Credit: 9,304,700 RAC: 0 |
Even if the optimised client were shifted to RALPH, the credit granting would still be equalized. Although we have no way of knowing how many optimised clients/systems will go to RALPH after CASP 7 ends, or how many are there now. I understand this, but what is to be gained by using the averaged credit claims of many computers versus the averaged time claims? The former can be 'manipulated'; the latter seems to be more of a constant. Time can also be judged by the scientists: since the simulation crunch time isn't affected by client version, bogus claims of time are more easily contested than client benchmarks. |
Keith Akins Send message Joined: 22 Oct 05 Posts: 176 Credit: 71,779 RAC: 0 |
Point well taken. Just exploring the options. I guess the WU approach would be harder to code but easier to administer. By the way: stock manager, stock client, and stock clocks - just in case anyone asks. |
Hoelder1in Send message Joined: 30 Sep 05 Posts: 169 Credit: 3,915,947 RAC: 0 |
Hoelder1in, that's a good suggestion - definitely easier to program. The first thing I think of, though, is: doesn't that just move the 'optimized' client into Ralph? While it wouldn't have as direct an impact as it does on Rosetta, the median would still be shifted by the various clients. If you determine average completion times on Ralph and set X credits per hour, how would you calibrate that in terms of FLOPS? You would need to know the average speed of the Ralph participants' computers for that. So, perhaps it would be better not to use Ralph at all and instead measure the completion times on a local computer (or computers) with known benchmarks? I guess they could just attach a couple of local Baker Lab (or otherwise trusted) computers to Ralph and use those to determine the median of the claimed credit (to make sure Ralph doesn't get 'hijacked' to drive the credits/structure up). Team betterhumans.com - discuss and celebrate the future - hoelder1in.org |
XS_Vietnam_Soldiers Send message Joined: 11 Jan 06 Posts: 240 Credit: 2,880,653 RAC: 0 |
|
Ethan Volunteer moderator Send message Joined: 22 Aug 05 Posts: 286 Credit: 9,304,700 RAC: 0 |
So, perhaps it would be better not to use Ralph at all and instead measure the completion times on a local computer (or computers) with known benchmarks? I guess they could just attach a couple of local Baker Lab (or otherwise trusted) computers to Ralph and use those to determine the median of the claimed credit (to make sure Ralph doesn't get 'hijacked' to drive the credits/structure up). This would be an awesome solution. What would it take for the community to accept it? Would they need one of each type of AMD and Intel CPU? Would those machines need to test Windows, Linux, and OS X performance? If the lab didn't have those resources, would it be unfair to suggest that everyone who is concerned about credits attach to Ralph part time to 'level out' the field? -E |
Keith Akins Send message Joined: 22 Oct 05 Posts: 176 Credit: 71,779 RAC: 0 |
Guys, I think we're on to something here. |
dumas777 Send message Joined: 19 Nov 05 Posts: 39 Credit: 2,762,081 RAC: 0 |
Goodness, this forum grew. I didn't want to start a holy war (though it was fun to flame a bit to get the client I built out there). I think I will build and post off the stable branch every time a client version is released (though probably not in the forum again :) - not so much to inflate credits, but because I hate running stock i386 binaries. FYI, one more shameless plug - the unofficial optimized 5.4.10 client can be downloaded at http://boese.kicks-ass.net:6969. |
Hoelder1in Send message Joined: 30 Sep 05 Posts: 169 Credit: 3,915,947 RAC: 0 |
I guess they could just attach a couple of local Baker Lab (or otherwise trusted) computers to Ralph and use those to determine the median of the claimed credit (to make sure Ralph doesn't get 'hijacked' to drive the credits/structure up). Well, the median is really quite powerful in terms of removing all kinds of differences and extremes. For instance, the fewer than 10% of computers running Linux can safely be ignored, as they will have no effect on the median. I also suspect that any Intel/AMD differences can be pretty much ignored for the purpose of determining the median. Team betterhumans.com - discuss and celebrate the future - hoelder1in.org |
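[Editor's sketch] The robustness described above is easy to see numerically (illustrative numbers only): one inflated claim in ten pulls the mean up noticeably but leaves the median essentially where it was.

```python
from statistics import mean, median

# Nine stock claims near 10 credits/structure, one inflated claim.
claims = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 30.0]

print(round(mean(claims), 2))    # pulled up by the single outlier
print(round(median(claims), 2))  # essentially unmoved
```

The median only shifts once inflated claims approach half the sample, which is why a handful of trusted hosts attached to Ralph would be enough to anchor it.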
XS_Vietnam_Soldiers Send message Joined: 11 Jan 06 Posts: 240 Credit: 2,880,653 RAC: 0 |
Movieman, Hello Ethan: I'm sorry, I think you misunderstood me. I'm not suggesting limiting the timeframe - exactly the opposite. If a WU needs 100,000 decoys run, and, say, on John's Dell P4-3000 it takes 12 hours to compute this, then that is the time that should be allowed. The end user shouldn't be able to do only "part" of the work that is needed. The time it takes to do the work (decoys) is what should be allowed; when the work is done, it's done. Thanks for your time, Movieman |
BennyRop Send message Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
The 100,000 decoys we're after per WU is in reference to the P1234_CASP7_LoopdyLoopRabbitHop WU types - not the individual P1234_CASP7_LoopdyLoopRabbitHop_31415926535 WUs that we download. Keep in mind that your Opteron 170 (or one of them) is only generating about 34.6 decoys/day with one of the latest WU types.

The science folks actually want data to play with, and mentioned wanting it back as soon as possible; having most of each day's work uploaded and ready to analyze is a big plus. Our crunching on the data until it's due and then uploading it all at once would cause trauma for the Rosetta staff. It would be nice to have the machines download a single WU and send back a trickle of data every 3, 6, 8, 12, or 24 hours as per your settings (something that's been brought up numerous times). But even then, we'd likely be limited in how many days we could run in that mode before we'd have to download a new WU so other WUs get enough work done on them.

Keep in mind that the idea I posted was the culmination of the discussions we had on the subject prior to CASP, and David seemed to agree with the plan. We've got a week to point out possible problems, make more suggestions, and offer ways to deal with the problems mentioned. When the programming staff is done dealing with the critical CASP issues, they can weed through our discussions, pull out the best ideas, and add one more item to the list of improvements that are thanks to the contributors.

As for Ethan's question - I'd be fine with the benchmark for credits/model being based on just a small number of the better-performing systems; systems a bit higher-performing than my Athlon 64 754-pin 3000+ at 2 GHz, for example. And as long as my system still gets about 255 credits a day, I'll be happy. *snicker* (It'll be interesting to see how performance changes when we switch to a production-based system, have all the Linux users happier, and get to see which CPU architecture is best at the various Rosetta approaches, i.e. the ABINITO-Don'tRunIntoTheEnd coding.) |
Morphy375 Send message Joined: 2 Nov 05 Posts: 86 Credit: 1,629,758 RAC: 0 |
It doesn't matter whether I'm a nice teammate or not... And if my English is too bad (again), what do you call what I call cheating? Is it legal just because some (many) are able to install such a BOINC client while others can't, because they lack the time or skills to do so? What does "allowed by the project leaders" mean? Are they able to allow or deny by words and force us to do something? No... They want the work done, and they don't want people to leave if they forbid optimized clients. And to make my point clear: why should some get more credits for the same work done? The same work! So please prove to me that an optimized client makes the results of Rosetta better, or has any other useful implication for the science, and I'll apologize... At TSC it is possible to crunch 2000 WUs in a bunker and then upload the results again and again to push the stats. It must be legal, because the project leaders aren't able to do anything against it in the short run. What would you call people who do this? And now back to work.... Teddies.... |
carl.h Send message Joined: 28 Dec 05 Posts: 555 Credit: 183,449 RAC: 0 |
I like to use XP and Vista; some like to use a flavour of Linux or 2000. Some use standard BOINC, others 5.5, etc. Some OC, some don't; some AMD, some Intel, some other. While others use 5.5, so shall I - end of... The project is changing the way credits are given; I've said that enough times. They are intent on putting it right, but does that stop the bickering? No! Personally I don't give a monkey's - call me a cheat, I don't care. I haven't changed, and I won't until it's all sorted. You're all entitled to think what you like of me; I doubt it will alter my life one iota! Not all Czechs bounce but I'd like to try with Barbar ;-) Make no mistake: This IS the TEDDIES TEAM. |
carl.h Send message Joined: 28 Dec 05 Posts: 555 Credit: 183,449 RAC: 0 |
So please prove to me that an optimized client makes the results of Rosetta better, or has any other useful implication for the science, and I'll apologize... One could say: prove that credits make the results of Rosetta better or have any useful implication! As you well know, the guys who originated and still swear by 5.5 state that the existing credit system favours one processor over another. From this we can conclude that the existing system isn't fair. 5.5 was/is an attempt to correct this, and in the eyes of a vast number it is better than the original. People will always see ways, in their eyes, to make things better. This often results in breakaway groups, as in boxing, wrestling, rugby, football, etc. In a free society we have the choice, as long as it's not illegal, to pick which we want. The fact that I like Rugby Union does not mean Rugby League (nancy game) is illegal. As for the TSC bit, you know as well as I that it is possible to optimise BOINC to accrue millions of points - it's not hard - but people stay with 5.5. Are these the out-and-out cheats you say they are? I think not! Question: is AMD cheating to call a 1.8 GHz chip a 3000+? Is the 3000+ as good/fast as a 3 GHz Conroe? Not all Czechs bounce but I'd like to try with Barbar ;-) Make no mistake: This IS the TEDDIES TEAM. |
Jose Send message Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0 |
So please prove to me that an optimized client makes the results of Rosetta better, or has any other useful implication for the science, and I'll apologize... Yes!!!! At last someone has got it!!!! Credits are but the frosting on a cake: beautiful to see but basically worthless. They may serve as a motivator for some, as a source of playful competition intramurally and/or intermurally, or, as has become clear, an obsession for some. What credits have also become is a source of backstabbing, slandering others, and diverting attention from the basic work of the project. That is why work - actual work - should be the standard that is recognized. I repeat again: the problem is not with the optimized clients, the problem is BOINC: any platform that makes the kind of egregious tinkering we have witnessed so easy (and I am not talking about the optimized clients for Linux, Mac and Windows) is open to tinkering regardless of the credit standard used. So as long as we have BOINC, any credit system that is implemented is open to extreme tinkering. So Morphy, will you join me in asking for a BOINC-free Rosetta, or if not, for a closed-source BOINC with heavy encryption? Or better still: will you join me in calling for credit-free DC projects? Hey, it is the science that matters, not the credits :) |
©2024 University of Washington
https://www.bakerlab.org