Well, I’m in York at the moment for the UKESF workshop, so I haven’t actually had much time to do any real electronics. That means it’s software o’clock!
Whenever I’m not doing much, I quite enjoy dabbling in image processing. It’s not one of my fortés, but it’s always fun, so today I decided to have a go at a Floyd–Steinberg-esque colour-to-black-and-white image dithering script (http://goo.gl/AcS6 – for those who don’t know). The way I have interpreted it is slightly different to the formal definition, so I’m not going to say they’re exactly the same – I wouldn’t really say they’re compatible! What I can say, though, is that the Harris version can successfully convert a colour picture into a black and white picture where the image “shade”, viewed from afar, is dependent on the local pixel density. My thinking here is that if I want to represent various shades with pixel values of 0 or 1, when viewed from afar, this can be done with the pixel density of a certain area of pixels. For example, in a 3×3 block, if all pixels are 1 (white), then the whole block from afar will look white (duh!), whereas if all pixels are 0 (black), it’s no surprise that the whole block will look black from afar. The cool stuff happens when you make half the pixels black and half the pixels white, which from afar will look “somewhat” grey!
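As a quick sanity check of the density idea, here’s a tiny sketch (in Python rather than my Matlab script – the block values are just made-up examples): the shade you perceive from afar is roughly the mean value of the block.

```python
# Three 3x3 blocks of 0 (black) / 255 (white) pixels. From a distance,
# the eye averages the block, so its apparent shade is just the mean.
all_white = [[255, 255, 255], [255, 255, 255], [255, 255, 255]]
all_black = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
half_half = [[255, 0, 255], [0, 255, 0], [255, 0, 255]]  # 5 of 9 white

def mean_shade(block):
    """Average pixel value of a 3x3 block -- the 'shade' seen from afar."""
    return sum(sum(row) for row in block) / 9

print(mean_shade(all_white))  # 255.0  -> looks white
print(mean_shade(all_black))  # 0.0    -> looks black
print(mean_shade(half_half))  # ~141.7 -> looks roughly mid-grey
```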
Edit: The images look better if you click them to view them at the proper unscaled resolution!
I’m sure you’re all looking forward to seeing some results so here you go!
As you can see above, it’s not brilliant, and the first doesn’t have the cool “random” pixel assignment of the other methods. I have, however, implemented a poor pseudo-random version which rotates the matrix after it has been used to give a more “random” feel – though many of my defined density matrices are symmetrical! Regardless, it does work relatively well, and you can certainly make out the landscape picture from the black and white picture below, even if the sky doesn’t translate too well…
So how does it work, I (don’t) hear you ask? It’s actually relatively simple (isn’t everything…), so I’ll break the algorithm down into steps:
- Convert the colour image to black and white by summing the three colour channels of each pixel and dividing by 3 (each channel ranges from 0 to 255 for a 24-bit BMP in Matlab).
- Define 10 (an arbitrary choice really – more should theoretically produce a more detailed picture, and you could have up to 2^(3×3) = 512!) 3×3 dither matrices of varying “brightness”. I define brightness as the number of 255s within the dither matrix. I’ve got 10 levels of brightness varying from [0,0,0;0,0,0;0,0,0] (brightness 0) to [255,255,255;255,255,255;255,255,255] (brightness 9), with values in between such as [255,0,255;0,255,0;255,0,255] (brightness 5). Symmetry isn’t really a big deal, though I’ve made mine symmetrical because I enjoy things like that for no particular reason.
- Create a 2D for loop that searches through the image array. Sum all the pixels in a 3×3 image macroblock and divide by a “sensitivity” parameter. Execute a chain of if statements which decides which dither matrix should replace the 3×3 image macroblock. Really simple, to be quite honest! Let’s say that B = macroblock brightness: if B<20, the 3×3 macroblock = dither matrix 0; if B<40, the 3×3 macroblock = dither matrix 1; and so on!
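The steps above can be sketched in plain Python (the original is a Matlab script I haven’t reproduced here, so the matrix choices, the 0–9 level mapping, and all names are my guesses, not the actual code):

```python
# Ten 3x3 dither matrices ordered by "brightness" = number of 255s (0..9).
# The exact pixel placements are illustrative; the post only fixes the
# all-black, all-white and chequerboard (brightness 5) patterns.
DITHER = [
    [[0, 0, 0], [0, 0, 0], [0, 0, 0]],                    # brightness 0
    [[0, 0, 0], [0, 255, 0], [0, 0, 0]],                  # 1
    [[255, 0, 0], [0, 255, 0], [0, 0, 0]],                # 2
    [[255, 0, 0], [0, 255, 0], [0, 0, 255]],              # 3
    [[255, 0, 255], [0, 255, 0], [0, 0, 255]],            # 4
    [[255, 0, 255], [0, 255, 0], [255, 0, 255]],          # 5
    [[255, 0, 255], [255, 255, 0], [255, 0, 255]],        # 6
    [[255, 0, 255], [255, 255, 255], [255, 0, 255]],      # 7
    [[255, 255, 255], [255, 255, 255], [255, 0, 255]],    # 8
    [[255, 255, 255], [255, 255, 255], [255, 255, 255]],  # 9
]

def to_grey(rgb_pixel):
    """Step 1: grey value = sum of the three channels divided by 3."""
    return sum(rgb_pixel) / 3

def density_dither(grey, block=3):
    """Steps 2-3: replace each 3x3 macroblock of the greyscale image with
    the dither matrix whose brightness level matches the block's mean.
    Instead of the post's if-statement chain I map the 0..255 mean
    straight to a matrix index, which is equivalent for even thresholds."""
    h, w = len(grey), len(grey[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            mean = sum(grey[by + y][bx + x]
                       for y in range(block) for x in range(block)) / block**2
            level = min(9, int(mean * 10 / 256))  # mean 0..255 -> index 0..9
            for y in range(block):
                for x in range(block):
                    out[by + y][bx + x] = DITHER[level][y][x]
    return out

# A 3x3 patch of mid-grey (128) maps to level 5, the chequerboard matrix.
patch = [[128] * 3 for _ in range(3)]
print(density_dither(patch))
```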
That is literally it. I store things in different arrays so I can display all versions of the image at the end, but it’s a really simple algorithm, and with some efficient coding, it could actually be quite fast. Efficient coding within Matlab, however, isn’t something I particularly partake in, as I will generally write things in C if I want them to run fast.
While the pseudo-randomisation might look better on the landscape, when applied to a face it looks drastically worse. Fortunately, I’ve made it selectable by a parameter within my Matlab script.