How does a computer convert a colour or black-and-white image to binary? Conversely, how is the binary data converted back to an image? I mean to ask: how does it actually process image data?
There are many variables involved in your question: bit depth, colour space, compression, and storage formats.
But the basic idea goes a bit like this.
An image is made up of pixels. A 100x100 image has 10,000 pixels. The raw data needs to represent the colour value of each one of those pixels. This is where bit depth comes into play. If you only had 1 bit per pixel, you would only be able to represent two colours - for example, black and white. This wouldn't make for a particularly interesting image, though you could trick the eye into seeing shading by using dithering. Early computers used 4-bit colour, providing a maximum of 16 colours. This was a start, but 8-bit colour proved much more interesting, allowing 256 colours. As computer hardware improved, the number of bits per pixel increased - 16-bit (65,536 colours) and then eventually 24-bit (16 million colours).
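The relationship between bit depth and colour count above is just powers of two, which a couple of lines of Python can confirm:

```python
# Number of distinct colours representable at each bit depth
# mentioned above: n bits per pixel gives 2**n possible values.
for bits in (1, 4, 8, 16, 24):
    print(f"{bits:>2}-bit: {2 ** bits:,} colours")
# ->  1-bit: 2
# ->  4-bit: 16
# ->  8-bit: 256
# -> 16-bit: 65,536
# -> 24-bit: 16,777,216
```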
Next comes the colour space. This describes how the colour information is stored within those bits. For example, many systems use 32 bits to store not only the red, green and blue components of the colour but also an alpha level (transparency). A basic example would be the ARGB colour space/model - the first 8 bits are alpha, then red, green and blue (8 bits each).
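As a sketch of that ARGB layout, here is how you might pack and unpack the four 8-bit components into a single 32-bit integer with bit shifts (the helper names are just for illustration):

```python
def pack_argb(a, r, g, b):
    # Each component occupies 8 bits: alpha in the top byte,
    # then red, green, and blue in the bottom byte.
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_argb(pixel):
    # Shift each byte down and mask off everything else.
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

opaque_red = pack_argb(255, 255, 0, 0)
print(hex(opaque_red))          # -> 0xffff0000
print(unpack_argb(opaque_red))  # -> (255, 255, 0, 0)
```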
Once you have this data, you need to choose how you store it. To avoid losing any data at all, you could store it in its rawest format - i.e. every single bit. This usually results in large files, however. For example, at 32 bits per pixel a 100x100 pixel image would take up 40K - not too bad... but what about a single frame of high-definition footage? 1920x1080 pixels at 32 bits? A little under 8MB. At that size, one second of footage would take up around 200MB - a whole movie, over 1TB. This is where compression comes in.
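Those size figures are straightforward arithmetic - width x height x bytes per pixel - which you can check yourself:

```python
def raw_size_bytes(width, height, bits_per_pixel):
    # Uncompressed size: one value per pixel, 8 bits to a byte.
    return width * height * bits_per_pixel // 8

print(raw_size_bytes(100, 100, 32))    # -> 40000 bytes (~40K)

frame = raw_size_bytes(1920, 1080, 32)
print(frame)                           # -> 8294400 bytes (a little under 8MB)
print(frame * 24)                      # one second at 24 fps -> ~200MB
```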
Compression is a very big topic - but let's look at a really simple example. What if we tried reducing the number of bits we had to store? We could convert lots of really similar colours into the same colour - this is something that used to be done a lot on the web. People stored images in 8-bit GIF format. With the reduced bit depth they took up much less space, but depending on how complex your image was, it could often result in a less-than-ideal visual impact: gradients would end up with banding, or photos would look pixelated.
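A minimal sketch of that "similar colours become the same colour" idea: keep only the top few bits of each 8-bit channel, so nearby values collapse together. (This is a simplification - real palette formats like GIF pick an optimised palette rather than truncating bits - but it shows where the space saving and the banding both come from.)

```python
def quantize_channel(value, bits):
    # Keep only the top `bits` bits of an 8-bit channel value,
    # snapping every value in a bucket to the bucket's floor.
    step = 256 // (1 << bits)
    return (value // step) * step

# Two similar shades collapse to the same 3-bit value:
print(quantize_channel(200, 3), quantize_channel(210, 3))  # -> 192 192
# ...which is exactly why smooth gradients show visible banding.
```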
Hopefully this gives you a little insight into how images are stored. Check out Wikipedia - there is lots more in-depth information on all the topics I touched on here. Here's a good place to start: http://en.wikipedia.org/wiki/Color_depth