Stereo vision color calibration

    Last time, I noticed that my webcams didn't have the same color response: one is clear and white, the other is red-tinted…
This difference in color is not desirable when applying stereo vision algorithms such as disparity mapping, which need two images with the same appearance for the matching algorithm to work at its best.

    I read an article by Afshin Sepehri called Color Calibration of Stereo Camera that describes in detail a technique to deal with this problem.

About the article

    The article details a technique for correcting the stereo image pair by applying a mathematical function to each pixel that changes its color to a calibrated one, i.e. the true color.

    He uses random test patterns composed of colored squares, which are printed and then viewed by the webcams being calibrated.

Afshin’s example test pattern

    Then he has the true color, the left viewed color and the right viewed color. He applies a minimization algorithm to find the mathematical functions to apply to the left and right image pixels so that each calibrated image ends up with the true, identical colors.

For the minimization process he has two approaches:

    1. Assuming that each true color component depends only on the same component: Rc = f1(Rf), Gc = f2(Gf), Bc = f3(Bf)
    2. Assuming that each true color component depends on all the components: Rc = f1(Rf, Gf, Bf), Gc = f2(Rf, Gf, Bf), Bc = f3(Rf, Gf, Bf)

    After testing both, he concludes that the second one is the best. He shows his results, and they look quite interesting!
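As an illustration of the second approach (my own sketch, not code from the article — I am assuming a linear model with an offset for simplicity), the cross-channel function can be fitted by least squares over the sampled square colors:

```python
import numpy as np

def fit_color_model(observed, true_colors):
    """Fit true ~ A @ observed + b by linear least squares.

    observed, true_colors: (N, 3) float arrays of RGB samples.
    Returns a (3, 4) matrix M such that true ~ M @ [R, G, B, 1].
    """
    n = observed.shape[0]
    # Augment with a constant column so the model includes an offset.
    X = np.hstack([observed, np.ones((n, 1))])           # (N, 4)
    M, *_ = np.linalg.lstsq(X, true_colors, rcond=None)  # (4, 3)
    return M.T                                           # (3, 4)

def apply_color_model(M, colors):
    """Apply the fitted model to an (N, 3) array of colors."""
    X = np.hstack([colors, np.ones((colors.shape[0], 1))])
    return X @ M.T

# Toy check: recover a known channel-mixing transform from samples.
rng = np.random.default_rng(0)
obs = rng.uniform(0, 1, (50, 3))
A = np.array([[0.9, 0.05, 0.0],
              [0.1, 0.8, 0.05],
              [0.0, 0.1, 0.95]])
b = np.array([0.02, -0.01, 0.03])
true = obs @ A.T + b
M = fit_color_model(obs, true)
print(np.allclose(apply_color_model(M, obs), true))  # → True
```

Each output channel here depends on all three input channels, which is exactly what distinguishes the second approach from the first.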

    I was quite frustrated that the article does not show the outcome of this technique on the disparity map output, to see whether it is really relevant or not… That's why I decided to give it a try and implement my own color calibration algorithm based on Afshin's article.

My implementation idea

    My approach is mostly the same as Afshin's, except that I decided to build a relative correction algorithm, whereas Afshin's is an absolute one.
He tries to obtain the true colors in his final images, which raises some issues:

    • After printing the test patterns, the printer's own color calibration may introduce errors in the colors
    • Scene lighting and light reflections may change the webcams' perception of the colors and introduce errors
    • There are two models, one for each image, so it may require significant computing power to run online in real time

My idea is based on these points:

    • One of the two webcam images is considered the reference; the calibration algorithm has to correct the other image to be as close as possible to the reference
    • The algorithm does not recover the true colors, but both images end up with consistent, calibrated colors

Implementation steps

For the programming part, I had the following pieces to code:

    • Random test chessboard pattern generator: generates one or more random test patterns as image files to be printed
    • Test pattern finder: an algorithm to find and extract the chessboard from an image
    • Color pattern extraction: locates the colored squares and averages the pixels each one contains, giving a list of the colors of the squares
    • Minimization process: builds the model by finding a function that transforms the old (uncalibrated) pixel colors into new calibrated pixel colors. It gives a transformation matrix that can then be saved to a file and loaded in another program, just as we do with extrinsics and intrinsics (see previous article)
    • Image transformation algorithm: feed an image in, get the calibrated image out
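To give an idea of the first and third pieces, here is a minimal sketch (my own illustration; the real pattern finder also has to locate the board in the camera view, e.g. with OpenCV's chessboard detection, which I omit here). The generator builds a grid of random colored squares, and the extraction step averages the pixels of each cell of the already-extracted pattern:

```python
import numpy as np

def make_test_pattern(rows=6, cols=8, square=64, seed=None):
    """Build a chessboard-like pattern of random colored squares.

    Returns the (rows*square, cols*square, 3) uint8 image (ready to be
    written to disk, e.g. with cv2.imwrite, and printed) and the grid
    of true cell colors.
    """
    rng = np.random.default_rng(seed)
    colors = rng.integers(0, 256, (rows, cols, 3), dtype=np.uint8)
    # Expand each cell color into a square x square block of pixels.
    return colors.repeat(square, axis=0).repeat(square, axis=1), colors

def extract_square_colors(img, rows, cols):
    """Average the pixels of each cell of an extracted, rectified pattern.

    A margin around each cell is skipped so that color bleed at the
    square borders does not pollute the average.
    """
    h, w = img.shape[:2]
    sh, sw = h // rows, w // cols
    out = np.empty((rows, cols, 3))
    for r in range(rows):
        for c in range(cols):
            cell = img[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            mh, mw = sh // 4, sw // 4
            out[r, c] = cell[mh:sh - mh, mw:sw - mw].reshape(-1, 3).mean(axis=0)
    return out

# Sanity check: on a clean synthetic pattern, extraction recovers the colors.
pattern, true_colors = make_test_pattern(seed=1)
measured = extract_square_colors(pattern, 6, 8)
print(np.allclose(measured, true_colors))  # → True
```

On real camera images the squares would of course not be axis-aligned, so the extraction runs on the rectified chessboard produced by the pattern finder.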
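For the image transformation step, applying the model per pixel can be vectorized with NumPy instead of looping. This is a sketch assuming the model is stored as a (3, 4) matrix mapping [R, G, B, 1] to the calibrated RGB color (my convention for illustration, not necessarily the article's):

```python
import numpy as np

def calibrate_image(img, M):
    """Apply a (3, 4) color transformation matrix to every pixel.

    img: (H, W, 3) uint8 image; M maps [R, G, B, 1] to the calibrated
    color (assumed convention). Returns a uint8 image of the same shape.
    """
    h, w = img.shape[:2]
    flat = img.reshape(-1, 3).astype(np.float64)
    # Append the constant 1 to every pixel, then apply the model at once.
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M.T
    return flat.clip(0, 255).astype(np.uint8).reshape(h, w, 3)

# Sanity check: an identity transform leaves the image untouched.
identity = np.hstack([np.eye(3), np.zeros((3, 1))])
img = np.random.default_rng(0).integers(0, 256, (4, 4, 3), dtype=np.uint8)
print(np.array_equal(calibrate_image(img, identity), img))  # → True
```

The same (3, 4) matrix is what gets saved to a file and reloaded in the stereo application, like the extrinsics and intrinsics.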

Final result

Once all these pieces were programmed, the result is quite interesting! Here is a quick preview:

StereoColorCalibrationScreenshot

    On the top left, you can see the uncalibrated raw images from the webcams; on the top right, the chessboards extracted from the stereo images.
On the bottom left, you can see the calibrated output images.

    In this sample application, I calibrated the right image using the left image as the reference. You can see that before calibration the right image is a little reddish, and once processed it is clearer and matches the left one really well!
Mission cleared!

    Oh, I must mention, though, that it runs quite slowly on my setup, as all of this runs in a virtual machine. The VM runs on my 2007 Dell Inspiron 1520 laptop, which is starting to lack processing power…
I'm currently considering buying a desktop computer with some "hardcore gamer" features… The problem is that it's quite expensive… Hard choice!
