Last night, I decided to have a go at blob detection after feeling a bit jealous of the CTS guys. My algorithm isn't particularly efficient and follows the pretty standard workflow for this kind of task, though, as can be seen in the video above, it's reasonably effective and the code for all the processing is simple. I'm hoping to implement this in an FPGA and control a robot for my final-year master's project, so there might be more to come there!
The method for obtaining the blob position like above is reasonably simple. The image is first converted from stock RGB to a brightness-independent colour space, e.g. YUV or HSV. I chose YUV because converting RGB to YUV is a simple linear operation consisting of a 3×3 matrix multiply. Once the image has been converted into YUV, the U and V components are used to threshold the image: colours that lie within a specified UV range are set to 255 and everything outside of this range is set to 0.

The thresholded image is then scanned pixel by pixel. For each pixel with a value of 255, an accumulator variable is incremented by one; for each pixel equal to 0, the same variable is decremented (clamped at a minimum of 0). This is essentially integrating the image. One peak detector runs along each row to find the column of maximum integrator amplitude, and a second peak detector finds the row whose peak amplitude is largest. The XY location of the blob can then be read off from these two peak detectors. A separate variable is used to determine the mass of the blob and is incremented by 1 for every pixel that is set to 255. The output is then plotted, along with the row integrator and mass, to get the video above!
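To make the steps above concrete, here's a minimal Python sketch of the whole pipeline. The post doesn't give the actual 3×3 matrix, so I've assumed the standard BT.601 RGB-to-YUV coefficients, and the function names and threshold ranges are illustrative only, not taken from my actual code.

```python
def rgb_to_yuv(r, g, b):
    # Assumed BT.601 coefficients: one row each of the 3x3 matrix multiply
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

def find_blob(image, u_range, v_range):
    """image: 2D list of (r, g, b) tuples. Returns (x, y, mass)."""
    best_peak, best_row, best_col = 0, 0, 0
    mass = 0
    for row_idx, row in enumerate(image):
        acc = 0                          # integrator, reset per row
        row_peak, row_peak_col = 0, 0
        for col_idx, (r, g, b) in enumerate(row):
            _, u, v = rgb_to_yuv(r, g, b)
            # UV threshold: inside the range counts as 255, outside as 0
            inside = (u_range[0] <= u <= u_range[1]
                      and v_range[0] <= v <= v_range[1])
            if inside:
                acc += 1                 # increment on a hit
                mass += 1                # blob mass counter
            else:
                acc = max(acc - 1, 0)    # decrement, clamped at zero
            if acc > row_peak:           # per-row peak detector
                row_peak, row_peak_col = acc, col_idx
        if row_peak > best_peak:         # second peak detector over rows
            best_peak, best_row, best_col = row_peak, row_idx, row_peak_col
    return best_col, best_row, mass
```

Note that, as described, the integrator peaks at the trailing (right) edge of the blob in the first row where the run is longest, so the reported position marks the blob's edge rather than its true centroid.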
Obviously this will only work for the largest blob (as long as it's not too close to another blob), meaning only one blob can be found at a time. However, if the object in the frame is something as garish as a pink piece of card, I can't imagine there being many of them in a single frame! More improvements are still to be made, predominantly detecting blobs of variable colour.