Interesting article on Artificial Intelligence

I had in my daily feeds an interesting article about a project involving Artificial Intelligence applied to a real-life system.

It’s about a helicopter that has to learn by itself to maintain a given height using a neural network trained by an evolutionary technique.

Check it out ! http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2012/sab323_and43/Website_sab323_and43_EvolvingHelicopter/website_evolvinghelicopter.html

I find this article quite interesting and inspiring, as it demonstrates that with a fairly simple setup and AI algorithms it is possible to obtain something that works and converges to a solution !
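
Just to illustrate the general idea of « a neural network trained by an evolutionary technique » (this is my own toy sketch, not their code, and the helicopter dynamics below are completely made up), here is what such a training loop can look like :

```python
import numpy as np

# Toy illustration of neuroevolution (NOT the authors' code) : each individual is
# the flat weight vector of a tiny feed-forward network mapping
# (height error, vertical speed) to a throttle command.
rng = np.random.default_rng(0)

def control(weights, error, speed):
    """One hidden layer of 4 tanh units, weights packed in a flat vector of 16."""
    w1 = weights[:8].reshape(4, 2)      # input -> hidden
    b1 = weights[8:12]                  # hidden biases
    w2 = weights[12:16]                 # hidden -> output
    h = np.tanh(w1 @ np.array([error, speed]) + b1)
    return float(np.tanh(w2 @ h))       # throttle in [-1, 1]

def fitness(weights, target=1.0, steps=200, dt=0.05):
    """Crude 1-D simulation : reward staying close to the target height."""
    height, speed, total_error = 0.0, 0.0, 0.0
    for _ in range(steps):
        thrust = control(weights, target - height, speed)
        speed += (thrust - 0.2) * dt     # made-up dynamics : thrust minus gravity
        height += speed * dt
        total_error += abs(target - height)
    return -total_error                   # higher is better

population = [rng.normal(size=16) for _ in range(30)]
for generation in range(50):
    parents = sorted(population, key=fitness, reverse=True)[:10]  # keep the best third
    population = parents + [p + rng.normal(scale=0.1, size=16)
                            for p in parents for _ in range(2)]   # mutate them

print("best fitness :", fitness(sorted(population, key=fitness, reverse=True)[0]))
```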

On the other hand, there are some things I don’t really understand, like why they chose to measure the helicopter’s height with a homemade IR sensor and not with a potentiometer placed at the tip of the boom… I think there may be some good reasons…

Apart from that, I really think it’s a good project and experiment !
Thank you Akshay and Sergio for sharing this ! Good work !

Instructions on how to modify the webcams

So the previous post was about adding stereo vision to my AL5C robot.

Now let’s do it ! I wrote an Instructable for easier reading. That’s also a more common way of documenting this kind of thing !

So, follow this link to access the Instructable : http://www.instructables.com/id/Giving-sight-to-your-Lynxmotion-AL5C/

Finally, here is my AL5C with stereo vision enabled :

Modified Lynxmotion AL5C

Feel free to share and comment (whether it’s here or on the Instructable !)

Adding Stereo Imaging to the Lynxmotion AL5C

My goal now is to add stereo imaging to my robot arm. To explain this, I’ll split the work into parts corresponding to each step and turn them into posts, to ease comprehension and navigation on the blog…

But hey !! What’s « stereo imaging » ?? It’s an algorithm that infers 3D data from two images of the same scene taken from two different known points. It then gives a 2D grayscale image like this famous one :

Stereo Imaging - Disparity map

On top, you’ve got the left and right images taken from two close points in a room.
On the bottom, the result : on the left, the noisy output of the algorithm ; on the right, the filtered result. (Image credits : http://afshin.sepehri.info/projects/ImageProcessing/ColorCalibration/color_calibration.htm)

White stands for closer objects : the darker a pixel is, the further it is from the camera.
As you can see, it’s a powerful algorithm that gives you the third dimension, the depth of a scene. We can imagine using the produced data with object recognition algorithms, to help a robot move in space while avoiding obstacles, or maybe also in SLAM algorithms (SLAM stands for Simultaneous Localization and Mapping).
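
To give an idea of what this looks like in code, here is a minimal sketch using OpenCV’s block-matching stereo algorithm. It assumes an already rectified image pair saved as « left.png » and « right.png » (placeholder names of mine) :

```python
import cv2

# Minimal disparity-map sketch using OpenCV's block-matching stereo algorithm.
# "left.png" and "right.png" are placeholder names for a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16 and blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Normalize for display : bright pixels = close objects, dark pixels = far away.
view = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", view)
```

The raw output is noisy, exactly like the bottom-left image above, so a filtering pass is usually needed afterwards.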

So, I started searching for a good camera. It has to be small enough to fit two of them on the robot arm, provide good quality images, have an auto focus feature… And also, be affordable ! That’s a lot of criteria, isn’t it ?

After spending some time searching, I finally found the Logitech C525, which has it all !
I also searched the web for information on what the camera has inside, to see what the PCB and the sensor look like and to get an idea of how to integrate the cameras on the robot arm (for example, locating the mounting holes, the sensor position…).
There’s one video which helped me a lot to see what’s inside the webcam : www.youtube.com/watch?v=mBwH2hqGkck. Thank you Peter !
Oh, and one little detail about the webcam : it has an internal microphone, so later I could play with sound too, in stereo !

Logitech C525

So, the Logitech C525 is an HD camera featuring 720p resolution and auto focus, and it’s quite affordable at around 40€ each… For full specifications, see the official Logitech website.

Finally, I bought two of these webcams. The next post will show you how to hack them to install them on the robot arm ! Interesting part, huh ? See you next time !

Interesting article about a new vision algorithm

It’s quite curious that just as I’m resuming my vision-related projects, an article appears about some guys from MIT who just found a new vision algorithm ! Coincidence ? Mmmh ?

So here is the article : http://www.technologyreview.com/aroundmit/526151/orienteering-for-robots/

Here is the French version (I didn’t forget ya !) : http://www.techno-science.net/?onglet=news&news=12669

I think it seems quite interesting ! It extracts more data from what’s already available today. But well, the point is that it seems to require some laser pointing/measurement system, which generally costs a lot of money and is pretty hard to embed on a small system.
Moreover, the article does not mention how long it takes for the algorithm to spit its results out… So we currently don’t know whether it is going to work in « realtime » or not…

We’ll probably have to wait for the presentation in June to get more details… Apart from that, it looks great and it may bring some new ideas to vision algorithms !

Time will tell !

Some news and 2014 project updates

It’s been a while since I last published anything on my blog… I’m sorry for that ! Poor robot arm, it’s now covered in dust, having waited for me for ages… 😉
I’ve been quite busy these days, working on another project, more web-oriented, which has nothing to do with artificial intelligence !

So, I decided to go back to my AI experiments. Last time, I got stuck with the camera I had installed on the Lynxmotion. The webcam I used was seriously old and the image was kind of reddish and blurry… It was also really slow and had no auto focus !! So there was no point in going any further…

I have already worked on projects involving cameras and, by using some libraries, I discovered how powerful they are. I saw some functions that I would be curious to try, and the one that excited me most was Stereo Imaging ! I had already thought about this possibility before, and even tried to program my own implementation of the algorithm, which did not really succeed even if it was beginning to work in some way, but it was quite slow…
There are some other functions which may be interesting, like the POSIT algorithm, SURF and so on… That’s plenty of stuff to investigate !
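
To give an idea of the kind of functions I’m talking about, here is a quick feature-matching sketch. I used ORB as a freely available stand-in for SURF (SURF itself lives in the opencv-contrib extra modules), and the image names are placeholders of mine :

```python
import cv2

# Quick feature-matching sketch. ORB is used here as a freely available
# stand-in for SURF ; "scene1.png" / "scene2.png" are placeholder file names.
img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (suited to ORB's binary descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Draw the 30 best matches side by side for a quick visual check.
result = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
cv2.imwrite("matches.png", result)
```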

I’ll make a dedicated post to explain in detail the steps I went through in my stereo imaging exploration.

Apart from that, I also have some things to publish on this blog, such as my work on what follows the servo tweaking : getting the position of each joint of the robot. This is a good step toward experimenting with AI, since you then get data to feed the models with.
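
To make that concrete, here is a purely hypothetical sketch of what reading the joint positions could look like, assuming the tweaked servos expose their potentiometer feedback through a microcontroller that streams one comma-separated line of raw readings over serial (the port name and the message format are placeholders of mine, not the actual setup) :

```python
import serial  # pyserial

# Hypothetical sketch : a microcontroller wired to the tweaked servos is assumed
# to send lines like "512,430,610,300,500" (one raw potentiometer value per joint).
PORT = "/dev/ttyUSB0"  # placeholder serial port name

def read_joint_angles(link):
    line = link.readline().decode("ascii", errors="ignore").strip()
    raw = [int(v) for v in line.split(",") if v]
    # Map assumed 10-bit ADC readings (0-1023) to assumed 0-180 degree joints.
    return [r * 180.0 / 1023.0 for r in raw]

with serial.Serial(PORT, 9600, timeout=1) as link:
    for _ in range(10):
        print(read_joint_angles(link))
```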

I’m also thinking about buying myself a better computer. I’m starting to feel slow with my current setup. I dream of a good computer labeled as a « gamer computer », with an Intel Core i-something, a heavy NVidia graphics card and 4 to 8 GB of RAM… Mmmh, I have to think about it and maybe save some money, since it’s going to be expensive !!

So, the next post will be about how I chose the webcam and about installing it on the robot arm.