Let's say I have an image that is 3000 px wide. I know (at least I think I do) that if I downsize it to be 1500 px wide (that is, 50%), the result will be better than if I resize it to be 1499 or 1501 px wide.
I suppose that will be so regardless of the algorithm used. But I have no solid proof, and the reason I'd like to have proof is that it could help me decide less obvious cases.
For instance, reducing it to 1000 px (one third) will also presumably work ok. But what about 3/4? Is it better than 1/2? It certainly can hold more detail, but will part of it not become irretrievably fuzzy? Is there a metric for the 'incurred fuzziness' which can be offset against the actual resolution?
For instance, I suppose such a metric would clearly show 3000 -> 1501 to be worse than 3000 -> 1500, by more than the one extra pixel of width is worth.
Intuitively, 1/n resizes, where n is a factor of the original size, would yield the best results, followed by n/m resizes where both numbers are as small as possible. Where the original size (in both X and Y) is not a multiple of the denominator, I'd expect the results to be poorer, though I have no proof of that.
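To make that concrete, here is the kind of crude experiment I have in mind. It's only a sketch, assuming Pillow and NumPy, with "photo.jpg" standing in for my 3000 px image and round-trip RMSE standing in for the 'incurred fuzziness' metric: downsize to a few target widths, scale back up to the original size, and see how much detail survives.

```python
# Rough sketch, not a definitive metric: downscale to several target widths,
# scale back up, and measure how much detail was lost along the way.
# Assumes Pillow and NumPy are installed; "photo.jpg" is a placeholder name.
import numpy as np
from PIL import Image

original = Image.open("photo.jpg").convert("L")   # grayscale keeps the math simple
w, h = original.size
ref = np.asarray(original, dtype=np.float64)

for target_w in (1000, 1500, 1501, 2250):
    target_h = round(h * target_w / w)
    # Image.Resampling.LANCZOS needs Pillow >= 9.1; older versions use Image.LANCZOS.
    small = original.resize((target_w, target_h), resample=Image.Resampling.LANCZOS)
    # Round-trip back to the original size so the images compare pixel-for-pixel.
    back = small.resize((w, h), resample=Image.Resampling.LANCZOS)
    diff = ref - np.asarray(back, dtype=np.float64)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    print(f"{w} -> {target_w}: round-trip RMSE = {rmse:.2f}")
```

If the neat-ratio intuition holds, I'd expect a dip in the error at 1000 and 1500 compared with 1501; whether the dip is actually noticeable presumably depends on the filter.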
These issues must have been studied by someone. People have devised all sorts of complex algorithms, so they must take this into consideration somehow. But I don't even know where to ask these questions. I ask them here because I've seen related ones with good answers. Thanks for your attention, and please excuse the contrived presentation.
The algorithm is key. Here's a list of common ones, from lowest quality to highest. As you get higher in quality, the exact ratio of input size to output size makes less of a difference. By the end of the list you shouldn't be able to tell the difference between resizing to 1499 or 1500.
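To experiment with them yourself, most imaging libraries expose these filters by name. Below is a minimal sketch using Python's Pillow (my choice of library, and the file name and sizes are placeholders, not something prescribed by the question) that writes the same downsize with several common filters so you can compare, say, 1499 against 1500 with your own eyes.

```python
# Illustrative only: one downsize done with Pillow's common resampling filters,
# roughly ordered from lowest to highest quality.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")   # assume a 3000 px wide source
new_w = 1499
new_h = round(img.height * new_w / img.width)  # keep the aspect ratio

filters = {
    "nearest":  Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic":  Image.Resampling.BICUBIC,
    "lanczos":  Image.Resampling.LANCZOS,      # windowed sinc
}

for name, resample in filters.items():
    img.resize((new_w, new_h), resample=resample).save(f"out_{name}.jpg", quality=92)
```

On Pillow versions before 9.1 the same constants live directly on Image (e.g. Image.LANCZOS).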