Stereo matching – first trials

We now have the left and right webcam images. Let’s apply the stereo matching algorithm.

Beforehand

Before doing anything, we first have to calibrate the webcams.
I tried a little without calibration, just to see what happens: it works, but the results depend on how well you manually turned the webcams to align the images.
It also greatly depends on how good the webcam lenses are, since they distort the image. It’s not something you can clearly see just by looking at the images.
It’s only once you run the calibration that you realize how distorted your images are, and that’s quite astonishing!
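
To give an idea of what the lens distortion alone looks like, here is a minimal C++/OpenCV sketch that undistorts a single image. The camera matrix and distortion coefficients below are made-up values; in practice they come out of the calibration step described next.

```cpp
// Minimal sketch: undistorting one image with known (here: made-up) calibration data.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("left01.jpg");

    // Hypothetical intrinsics -- real values come from the calibration step.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320,
                                             0, 700, 240,
                                             0,   0,   1);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.2, 0.1, 0, 0, 0);

    cv::Mat undistorted;
    cv::undistort(img, undistorted, K, dist);

    // Comparing the two windows shows how much the lens bends the image.
    cv::imshow("original", img);
    cv::imshow("undistorted", undistorted);
    cv::waitKey();
    return 0;
}
```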

So what does calibration do?
It computes transformations that remap both images onto a common plane on which they are perfectly aligned.
It may require cropping your images a little, because the deformation applied to each image leaves some distorted borders.
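
For reference, this mapping is what cv::stereoRectify computes in OpenCV. The sketch below assumes we already have the camera matrices, distortion coefficients and the rotation/translation between the two cameras from a calibration step (described later); the alpha parameter is what controls how much of the distorted borders gets cropped.

```cpp
// Sketch: computing the rectification transforms from calibration data.
#include <opencv2/opencv.hpp>

void rectifyTransforms(const cv::Mat& M1, const cv::Mat& D1,   // left camera matrix + distortion
                       const cv::Mat& M2, const cv::Mat& D2,   // right camera matrix + distortion
                       const cv::Mat& R,  const cv::Mat& T,    // rotation/translation between cameras
                       cv::Size imageSize)
{
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T,
                      R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY,
                      0,            // alpha: 0 = crop to valid pixels, 1 = keep everything (black borders)
                      imageSize);
    // R1/R2 rotate each camera onto the common plane, P1/P2 are the new
    // projection matrices, and Q re-projects disparities to 3D points.
}
```
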
As an example, here is the output from the OpenCV stereo_match sample application.

[Image: StereoMatch00]

Now we have the assurance that both images coming out of this step are undistorted and perfectly aligned. It makes a big difference in the resulting disparity map.

My implementation idea

As I explained, I need my webcams to be calibrated.
I spent some time searching for how to do it, and the easiest way I could find is to use the stereo_calib sample from the OpenCV framework.
Unfortunately, EmguCV does not provide samples we could play with to calibrate the webcams.
OpenCV has a lot of samples, including two apps called stereo_calib and stereo_match, which are perfect for what we’d like to do!

The stereo_calib program uses many pairs of stereo images of a chessboard to calculate a few matrices, which are stored in two files named intrinsics.yml and extrinsics.yml. These files are then used by the final application to compute the image deformation and obtain calibrated images.
This part is shown in the stereo_match sample from OpenCV.
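
For reference, here is a minimal C++/OpenCV sketch of reading those two files back with cv::FileStorage. The key names (M1, D1, M2, D2, R, T, R1, R2, P1, P2, Q) are the ones used by the stereo_calib sample I ran; if your version stores different names, just open the YML files to check.

```cpp
// Sketch: loading the calibration matrices written by stereo_calib.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat M1, D1, M2, D2;           // intrinsics: camera matrices and distortion coefficients
    cv::Mat R, T, R1, R2, P1, P2, Q;  // extrinsics: stereo geometry and rectification transforms

    cv::FileStorage fi("intrinsics.yml", cv::FileStorage::READ);
    fi["M1"] >> M1; fi["D1"] >> D1;
    fi["M2"] >> M2; fi["D2"] >> D2;

    cv::FileStorage fe("extrinsics.yml", cv::FileStorage::READ);
    fe["R"]  >> R;  fe["T"]  >> T;
    fe["R1"] >> R1; fe["R2"] >> R2;
    fe["P1"] >> P1; fe["P2"] >> P2;
    fe["Q"]  >> Q;

    std::cout << "Left camera matrix:\n" << M1 << std::endl;
    return 0;
}
```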

On the implementation side, I think it’s better to stay as close to the OpenCV environment as possible.
The idea is to use stereo_calib sample application to calibrate the webcams and generate the YML files.
Then, in my C# program, load these files and compute the image deformations by porting some parts of the stereo_match sample application to EmguCV.
Of course, we could do the same for the stereo_calib sample and port it to C# instead of using it directly from OpenCV… but I found this sample quite complete, and it does exactly what I need, so I chose not to spend time on it.
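
To give an idea of the part I ported, here is a C++/OpenCV sketch (OpenCV 3+ API) of the core of stereo_match: build the rectification maps from the matrices loaded above, remap both images and run block matching. My actual code does the equivalent in C# with EmguCV, and the block-matching parameters below are only a starting point.

```cpp
// Sketch: rectify a stereo pair with the calibration data, then compute a disparity map.
#include <opencv2/opencv.hpp>

cv::Mat computeDisparity(const cv::Mat& left, const cv::Mat& right,
                         const cv::Mat& M1, const cv::Mat& D1,
                         const cv::Mat& M2, const cv::Mat& D2,
                         const cv::Mat& R1, const cv::Mat& R2,
                         const cv::Mat& P1, const cv::Mat& P2)
{
    // Undistortion + rectification lookup maps, one pair per camera.
    cv::Mat map11, map12, map21, map22;
    cv::initUndistortRectifyMap(M1, D1, R1, P1, left.size(),  CV_16SC2, map11, map12);
    cv::initUndistortRectifyMap(M2, D2, R2, P2, right.size(), CV_16SC2, map21, map22);

    // Warp both images so that corresponding points lie on the same row.
    cv::Mat leftRect, rightRect;
    cv::remap(left,  leftRect,  map11, map12, cv::INTER_LINEAR);
    cv::remap(right, rightRect, map21, map22, cv::INTER_LINEAR);

    // StereoBM works on 8-bit grayscale images.
    cv::Mat leftGray, rightGray;
    cv::cvtColor(leftRect,  leftGray,  cv::COLOR_BGR2GRAY);
    cv::cvtColor(rightRect, rightGray, cv::COLOR_BGR2GRAY);

    // Block matching; numDisparities and blockSize are just reasonable defaults.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disparity;
    bm->compute(leftGray, rightGray, disparity);
    return disparity;
}
```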

Installing OpenCV

So, as I said before, we have to install OpenCV to access its sample apps.
Since I didn’t find it that easy to do without prior knowledge, I’ll explain the main steps I went through, in case it helps…

You first have to download the latest OpenCV package: http://opencv.org/

Once you’ve got it, install it.
If you don’t already have it, download and install CMake: http://www.cmake.org/cmake/resources/software.html

On Windows (it’s probably similar for other operating systems), open a terminal window, navigate to where you installed OpenCV, go into the samples folder, run CMake on it and then build the generated project. That should build the samples. Be aware that it may take some time to compile…
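
For example, something along these lines from a command prompt (the path is just an example; adapt it to wherever you installed OpenCV):

```
cd C:\opencv\samples
mkdir build
cd build
cmake ..
cmake --build . --config Release
```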

Once it’s built, you should find the OpenCV samples as executables in the build folder.

Running the calibration

OpenCV ships with stereo images named leftNN.jpg and rightNN.jpg, with NN going from 01 to 13. There’s also a file called stereo_calib.xml which contains the list of these image files.
So you can copy these files into the same directory as the stereo_calib executable. You can now run stereo_calib and see what it gives!
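
Something like this, run from the folder where you copied the images and the XML file (the chessboard in the sample images has 9×6 inner corners; the exact flag syntax varies between OpenCV versions, so check the usage the program prints if this doesn’t match yours):

```
stereo_calib -w=9 -h=6 stereo_calib.xml
```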

Final result

OK, it’s working now! I ran the sample app with my own stereo images and got the extrinsics and intrinsics files, which I then loaded into my program, and it gives this:

[Image: DisparityMap]

That’s pretty encouraging! We can clearly see the shape of the lamp and distinguish the helicopter!

That’s quite good, but I would like to go a bit further into the subject. I noticed that the cameras don’t have the same color rendering, as you can see from the screenshot. Sometimes it’s even worse, depending on how each camera chooses to adapt to its surroundings (ambient light, focus and so on). I saw that this can significantly affect the stereo matching results.

So what I’ll try to do is lock some camera parameters, starting with the exposure, which keeps changing the brightness of the image.
I’m also about to try some color calibration algorithms.
I’ll then be able to see how that affects the quality of the stereo matching results.
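
As a first experiment, here is a C++/OpenCV sketch of what I mean by locking the exposure, using VideoCapture properties. My actual capture code is in C# with EmguCV, and whether these properties are honoured depends on the capture backend and the camera driver, so take it as a sketch rather than a guaranteed fix.

```cpp
// Sketch: trying to disable auto-exposure and fix the exposure on both webcams.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture leftCam(0), rightCam(1);   // camera indices are an assumption

    for (cv::VideoCapture* cam : { &leftCam, &rightCam })
    {
        // 0.25 selects manual exposure on some backends (e.g. V4L2);
        // other backends use different conventions.
        cam->set(cv::CAP_PROP_AUTO_EXPOSURE, 0.25);
        cam->set(cv::CAP_PROP_EXPOSURE, -6);    // value scale is driver-specific
    }

    cv::Mat left, right;
    leftCam >> left;
    rightCam >> right;
    return 0;
}
```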
