Message boards : Rosetta@home Science : Will future Rosetta work units be on a GPU, like AlphaFold?
Technologov (Joined: 17 Jul 07 · Posts: 16 · Credit: 414,950,063 · RAC: 100,077)
https://www.wired.com/story/without-code-for-deepminds-protein-ai-this-lab-wrote-its-own/

It seems that a neural network like AlphaFold does the job better than Rosetta@home, and RoseTTAFold appears to copy AlphaFold's network architecture. Does this mean that future work will focus more on GPU contributions?
dcdc (Joined: 3 Nov 05 · Posts: 1831 · Credit: 119,526,853 · RAC: 6,993)
It's a good question. I think it's important to note that AlphaFold uses Rosetta to fine-tune the models it creates, and that step is CPU-based. Or at least that's my understanding. It will be interesting to get some updates from the project. I expect that protein design and more complex problems, such as protein-protein and protein-environment interactions, will be where more computing power is required, and I wouldn't be surprised if those still need to be CPU-based.
Technologov (Joined: 17 Jul 07 · Posts: 16 · Credit: 414,950,063 · RAC: 100,077)
Nonsense. AlphaFold doesn't use Rosetta in any way, shape, or form. You're spreading misinformation.
dcdc (Joined: 3 Nov 05 · Posts: 1831 · Credit: 119,526,853 · RAC: 6,993)
> Nonsense. AlphaFold doesn't use Rosetta in any way, shape, or form. You're spreading misinformation.

Are you sure? It says otherwise here: https://fold.it/portal/node/2008706

> Finally, AlphaFold combines their distance predictions with the Rosetta energy function (the same energy function used by Foldit) to refine their final folded structure.
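For context on what "combining distance predictions with the Rosetta energy function" can look like in practice, one common scheme (a sketch of the general restraint approach, not necessarily AlphaFold's exact formulation) adds the network's predicted distances to the physical score as penalty terms:

E_total(x) = E_Rosetta(x) + Σ_{i<j} w_ij · (d_ij(x) − d̂_ij)²

where d_ij(x) is the distance between residues i and j in conformation x, d̂_ij is the predicted distance, and w_ij weights each restraint (e.g. by prediction confidence). Minimizing E_total pulls the model toward the predicted geometry while the energy function keeps it physically plausible.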
[VENETO] boboviz (Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,551,716 · RAC: 6,403)
The next CASP competition, in 2022, will be very interesting!
Greg_BE (Joined: 30 May 06 · Posts: 5691 · Credit: 5,859,226 · RAC: 0)
The last time this was addressed was in 2016:

> More computing performance is not a good answer if the limit comes from available memory rather than from computing power. Rosetta@home has already looked into GPU versions, and found that they would require about 6 GB of graphics memory per GPU to get the expected 10 times the performance of the CPU version.

This has never changed. Another user wrote in 2016:

> IMHO, a GPU can be thought of as a set of very simplified ALUs with thousands of 'registers', on which those ALUs perform SIMD (single instruction, multiple data) execution. Typical GPUs have hundreds to thousands of cores (e.g. CUDA cores), and they benefit from a specific class of problem: the whole array or matrix is loaded into the GPU's registers, and SIMD instructions run the algorithm in a highly *vectorized* fashion. Among other things, this means the problem needs to be *vectorizable*, *large*, and able to *run entirely in the registers without accessing memory*. It is useless if we are trying to solve 2x2 matrices over and over again, where each iteration depends on the previous one; the rest of the GPU simply sits *unused* except for a few transistors.
>
> In addition, adapting algorithms to GPUs is often a significantly *difficult* software task. It isn't as simple as 'compiling' a program with GPU optimizations. Quite often the algorithms at hand *cannot use the GPU's vectorized* infrastructure at all, which can require a *complete redesign* or even entirely different algorithms and approaches. While I wouldn't want to discourage users who have invested in GPUs, these are real software challenges to 'make it work'. Since I personally don't use software that exploits these aspects of GPUs, I've refrained from buying one and make do with a fairly recent Intel i7 CPU. I suspect similar challenges confront the Rosetta research team, and I tend to agree that functional needs are the higher priority versus redoing all the algorithms just to make them use GPUs; the functional needs are themselves complex, and spending overwhelming effort on GPU algorithms could compromise the original research objectives.

But even LHC (ATLAS) uses only 4 cores per task and no GPU, and WCG uses only CPUs. That's just the way their science works. R@H has something that works for their science and their group of programmers, so why change it?

And this thread might be of interest to you: https://boinc.bakerlab.org/rosetta/forum_thread.php?id=14328
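To make the quoted point about vectorizable workloads concrete, here is a minimal CUDA sketch (this is not Rosetta code; the function names and sizes are invented purely for illustration). The kernel is embarrassingly parallel, so thousands of GPU threads can each handle one element at once; the second function carries a dependence from each iteration to the next, so no amount of GPU hardware can speed it up:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Data-parallel: every element is independent, so the work vectorizes
// cleanly across GPU threads (one thread per element).
__global__ void scale_add(const float *x, const float *y, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * x[i] + y[i];   // no dependence between elements
}

// Counter-example: each step needs the previous result, so this loop
// cannot be spread across GPU threads; it stays serial no matter what.
float dependent_chain(const float *x, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = 0.99f * acc + x[i];      // iteration i depends on i-1
    return acc;
}

int main()
{
    const int n = 1 << 20;             // ~1M elements (arbitrary size)
    float *x, *y, *out;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    scale_add<<<(n + 255) / 256, 256>>>(x, y, out, n);
    cudaDeviceSynchronize();

    printf("parallel: out[0] = %.1f; serial chain = %.1f\n",
           out[0], dependent_chain(x, n));

    cudaFree(x); cudaFree(y); cudaFree(out);
    return 0;
}
```

Rewriting a large, branch-heavy code base so its hot loops look like scale_add rather than dependent_chain is exactly the redesign effort the quoted post describes.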
[VENETO] boboviz (Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,551,716 · RAC: 6,403)
> The last time this was addressed was in 2016.

2016... 2021: RoseTTAFold!