I'm working on a color quantization algorithm.
This is the basic process:
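Roughly, in Python (a minimal sketch of the idea, assuming we start from a single set; `set_error` and `split_set` are placeholder helpers, filled in below):

```python
import numpy as np

def quantize(colors, n_palette):
    """Repeatedly split the 'worst' set until we have n_palette sets."""
    sets = [colors]                            # start with every color vector in one set
    while len(sets) < n_palette:
        worst = max(range(len(sets)), key=lambda i: set_error(sets[i]))
        s = sets.pop(worst)                    # take the worst set out...
        sets.extend(split_set(s))              # ...and replace it with its two halves
    return [s.mean(axis=0) for s in sets]      # palette = mean color of each set
```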
By "worst set" I mean the set with the largest accumulated distance between its vectors and the set's mean vector.
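That error measure, as a sketch (the name `set_error` is just a placeholder):

```python
def set_error(s):
    """Accumulated distance of each vector in the set from the set's mean vector."""
    mean = s.mean(axis=0)
    return np.linalg.norm(s - mean, axis=1).sum()
```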
And this is how I "split a set":
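Again only a sketch, assuming the split is made at the mean along the color channel with the widest range (no guard against degenerate splits):

```python
def split_set(s):
    """Split at the mean of the color channel with the widest range."""
    channel = np.argmax(s.max(axis=0) - s.min(axis=0))   # channel with the largest spread
    below = s[:, channel] <= s[:, channel].mean()
    return [s[below], s[~below]]
```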
This basically works, but the resulting palette looks odd, as if it had been picked from a linear gradient...
Is my algorithm just plain wrong? Can someone help?
The problem is that your algorithm depends a LOT on the initial sets. Because you only ever split sets, two points that are very close to each other but happen to be in different sets at the beginning will always stay in different sets. This is not good.
So yes - this is worse than the k-means algorithm.
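As a rough illustration (not your code, just a sketch assuming numpy arrays of color vectors), a few Lloyd-style k-means refinement passes on top of your split result would let nearby points migrate into the same cluster:

```python
import numpy as np

def kmeans_refine(colors, palette, iterations=10):
    """Lloyd iterations: reassign colors to the nearest palette entry, then recompute means."""
    palette = np.asarray(palette, dtype=float)
    for _ in range(iterations):
        # distance of every color to every palette entry
        dists = np.linalg.norm(colors[:, None, :] - palette[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        for k in range(len(palette)):
            members = colors[nearest == k]
            if len(members):                   # keep the old entry if a cluster goes empty
                palette[k] = members.mean(axis=0)
    return palette
```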