My newest version of canonical path augmentation code for generating graphs has reached a new high point - generating 11,716,571 graphs on ten vertices. Of course, it also gets the number of nines (261,080) and eights (11,117) correct, which is great, but I'm cautious about declaring it 'correct', especially given that the last version did not get the sevens and eights right. See, for example, these past failures:
So how does it get the right answer? Well, it now properly uses the method mentioned in this post to only pick canonical deletions that are not cut-vertices. That turns out to be necessary only for graphs on 8 vertices, but you still have to check it for every augmentation, which seems expensive; the sketch below shows roughly what the test involves.
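As a rough illustration - a minimal sketch assuming a plain adjacency-list representation, not the Graph class the generator actually uses - a vertex is a cut-vertex exactly when a search that skips it fails to reach every other vertex:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class CutVertexCheck {

    // Returns true if removing vertex v disconnects the graph.
    // adj is an adjacency list for a connected graph on adj.size() vertices.
    static boolean isCutVertex(List<List<Integer>> adj, int v) {
        int n = adj.size();
        if (n <= 2) return false; // K1 and K2 have no cut-vertices

        // DFS from any vertex other than v, treating v as already removed
        boolean[] seen = new boolean[n];
        seen[v] = true;
        int start = (v == 0) ? 1 : 0;
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        seen[start] = true;
        int reached = 1;
        while (!stack.isEmpty()) {
            int u = stack.pop();
            for (int w : adj.get(u)) {
                if (!seen[w]) {
                    seen[w] = true;
                    reached++;
                    stack.push(w);
                }
            }
        }
        // v is a cut-vertex iff the search misses one of the other n - 1 vertices
        return reached < n - 1;
    }
}
```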
However, there was a more fundamental problem; consider the example below (basically nicked from Derrick Stolee's blog post):

Obviously A and B are isomorphic, yet how do we properly distinguish them? Well, the key is the set of vertices added to - on the image, these are the labels on the edges between graphs: {0}, {1, 3}, etc. When a new graph is created, a vertex is chosen - using canonical labelling, in my case - and the vertices attached to it must be the ones we used to make that augmented graph. I was checking the set of augmented vertices against the automorphism group of the parent, when I should have been using the child's.
So, the canonical checking is now better. I seem to have written a thousand of these methods, but this one (I think!) finally does it right. What I was getting wrong was checking the orbit of the canonical deletion vertex, rather than the orbit of the set of vertices it was being connected to; in code, the corrected check looks something like the sketch below.
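This is only a sketch: the representation of automorphisms as int arrays, and however the child's group is obtained, are assumptions standing in for what the real code does. The essential point is that the permutations come from the child, not the parent:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CanonicalAugmentationCheck {

    // The augmentation is canonical if some automorphism of the CHILD maps
    // the neighbourhood of the canonical deletion vertex onto the set of
    // vertices that the new vertex was actually attached to.
    static boolean isCanonical(Set<Integer> augmentedSet,
                               Set<Integer> canonicalNeighbours,
                               List<int[]> childAutomorphisms) {
        for (int[] perm : childAutomorphisms) {  // perm[i] is the image of vertex i
            Set<Integer> image = new HashSet<>();
            for (int v : canonicalNeighbours) {
                image.add(perm[v]);
            }
            if (image.equals(augmentedSet)) {
                return true;
            }
        }
        return false;
    }
}
```

Note that the identity permutation is always in the group, so the simple case - where the canonical deletion's neighbourhood is exactly the augmented set - passes immediately.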
Great! Now what? How long does it take? See this, where the purple line is the new code, and the others are older attempts:

Clearly the problem now is that of verifying the results - it's quite slow to generate these large datasets, and storing them uncompressed takes a lot of space (one small mitigation is sketched below). The nines took minutes and megabytes of space, while the tens took hours and over a gigabyte. At this rate, the elevens would take days and tens of gigabytes. In any case - where do you stop?
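On the storage side, one option is to stream the output through gzip as it is generated rather than writing raw text, since one-graph-per-line encodings like graph6 compress well. This is just a sketch using the standard library; the encoding of each graph to a string is assumed to come from elsewhere:

```java
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CompressedGraphWriter implements AutoCloseable {

    private final BufferedWriter writer;

    public CompressedGraphWriter(String path) throws IOException {
        // Compress on the fly so the full uncompressed text never hits disk
        writer = new BufferedWriter(new OutputStreamWriter(
                new GZIPOutputStream(new FileOutputStream(path)),
                StandardCharsets.US_ASCII));
    }

    // Write one graph per line, e.g. its graph6 string
    public void writeLine(String encodedGraph) throws IOException {
        writer.write(encodedGraph);
        writer.newLine();
    }

    @Override
    public void close() throws IOException {
        writer.close();
    }
}
```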
Comments
I tried to download the code but the old URL gives the good ole 404.
https://github.com/gilleain/generate/blob/master/src/test/scheme3/TimingTests.java
Also, for those people interested but not able to install all the Java dependencies,
a compiled JAR with *.sh, *.bat, and *.exe files would be good (that would be me and others :-).
For these benchmarks it would also be useful to have the machine, CPU,
and disk/ramdisk specifications.
Good to read some new stuff (heavy GitHub contributions in April/May 2015).
Cheers
Tobias