To sum up the previous flood of posts: constitutional isomers can be generated from an elemental formula by generating all partitions of the total 'free' valence of the heavy atoms. The overall scheme is shown here:
(click for bigger, as usual). So, for each formula, multiple partitions can be made; each of these makes multiple sub-partitions, and each of those corresponds to one or more molecules.
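As a rough sketch of the first step, the partitions can be generated as bounded integer partitions: the total free valence split into one part per heavy atom, each part no larger than the atom's valence. The function name and the C4H10 worked example below are my own illustration, not code from the post:

```python
def bounded_partitions(total, parts, max_part):
    """Partitions of `total` into exactly `parts` non-increasing parts,
    each between 1 and max_part. Here each part plays the role of one
    heavy atom's degree in the heavy-atom skeleton."""
    if parts == 0:
        if total == 0:
            yield ()
        return
    # The next part cannot exceed max_part, and must leave at least
    # 1 unit of valence for each of the remaining parts.
    for p in range(min(max_part, total - (parts - 1)), 0, -1):
        for rest in bounded_partitions(total - p, parts - 1, p):
            yield (p,) + rest

# C4H10: 4 carbons of valence 4 give 4*4 - 10 = 6 'free' valence units
# to share among carbon-carbon bonds.
print(list(bounded_partitions(6, 4, 4)))
# -> [(3, 1, 1, 1), (2, 2, 1, 1)]
```

The two partitions correspond to the degree sequences of isobutane (3, 1, 1, 1) and n-butane (2, 2, 1, 1).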
Now, I won't pretend that any of this is particularly novel. I am no doubt re-expressing the problem of generating all possible molecules in a slightly different way. Having tried (and failed) to implement published methods, this was the best I could come up with.
I suspect that there are many improvements that could be made to the algorithm, and the implementation of it. Getting something that works, even in a limited way, seems like progress, however :)
Comments
Looking at the figures of the partitions, it becomes clear that the deterministic generation of all possible isomers is an embarrassingly parallel problem.
Each partition can be handled as a single problem, which means that if you have 10000 partitions and 10000 CPUs (CUDA TESLA, SiCortex Supercomputer),
you could dedicate each problem to one CPU core or thread.
The problem with the old monolithic CDK deterministic isomer generator was that the (FORTRAN-style) code could be easily parallelized, but the canonizer was extremely slow. So even having n CPUs at hand would
not solve the speed problem.
But I think that, for the molecular space below 500 Da, a fully parallelized version could solve most of the problems in a sufficient time frame (if the above problem were fixed and n CPUs were available).
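The one-partition-per-worker scheme described above could be sketched with a process pool; the expansion function here is a hypothetical placeholder for whatever enumerates the structures of a single partition:

```python
from multiprocessing import Pool

def structures_from_partition(partition):
    """Placeholder: expand one degree partition into its structures.
    A real generator would enumerate and canonicalize the molecules
    whose heavy-atom degrees match `partition`."""
    return [partition]  # stand-in result

if __name__ == "__main__":
    partitions = [(3, 1, 1, 1), (2, 2, 1, 1)]
    with Pool() as pool:
        # Each partition is an independent work unit, so the map
        # parallelizes with no coordination between workers.
        results = pool.map(structures_from_partition, partitions)
    print(results)
```

Since no worker depends on another's output, this scales to as many cores (or GPU threads) as there are partitions.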
Cheers
Tobias
fiehnlab.ucdavis.edu
You are right, it does look like it can be easily run in parallel.
One important thing, though, is that the number of partitions grows much more slowly than the number of structures - for the CnH2n series, the number of partitions is (42, 627, 5604) for C=(10, 20, 30). There are a LOT more C30H60 structures than 5604...
So, it might be that the natural 'unit' of work would be smaller - but the problem at the moment is that it still checks for isomorphism within the set of children of each partition.
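One way around pairwise isomorphism checks within a partition's children is to compare canonical forms instead, so each candidate is tested against a set. A brute-force sketch (my own illustration; fine for tiny skeletons, though a real generator needs a much faster canonizer):

```python
from itertools import permutations

def canonical_form(n, edges):
    """Brute-force canonical labelling of a small graph: the
    lexicographically smallest sorted edge set over all vertex
    relabellings. Two graphs are isomorphic iff their canonical
    forms are equal, so duplicates can be filtered with a set."""
    best = None
    for perm in permutations(range(n)):
        key = tuple(sorted(
            (min(perm[a], perm[b]), max(perm[a], perm[b]))
            for a, b in edges
        ))
        if best is None or key < best:
            best = key
    return best

# Two labellings of the same 4-vertex path (an n-butane skeleton)
g1 = [(0, 1), (1, 2), (2, 3)]
g2 = [(2, 0), (0, 3), (3, 1)]
print(canonical_form(4, g1) == canonical_form(4, g2))  # True
```

The O(n!) loop over permutations is the slow part that tools like the CDK canonizer replace with cleverer invariant-based refinement.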
Still, it is a good idea.
gilleain
"number of partitions is (42, 627, 5604) for C=(10, 20, 30). There are a LOT more C30H60 structures than 5604..."
...well, you are right - let's say below 200-300 Da, then. There we go: the isomorphism tester is still the bottleneck, so a faster isomorphism tester is still needed.
If you take a CUDA TESLA C1060 with 240 GPU streaming processors and around 80 GFLOP/s double precision (or roughly 1000 GFLOP/s single precision), it should still be faster than an 8-core (16-thread) Intel Core i7, which has around 40 GFLOP/s (double precision) and 80 GFLOP/s (single precision). The CUDA bottleneck can be the transfer from the CPU to the GPU.
In conclusion, a massively parallelized code version distributing each partition to its own core, using an ultrafast isomorphism tester, together with a versatile good-list and bad-list handler, bundled with proper NMR, MS, and IR handlers, would be the way to go :-)
If I go to Wolfram Alpha and ask for the number of all isomers in the universe, it still tells me: 42
Cheers
Tobias