Several projects use cameras and computing to help visually impaired people navigate 3D spaces by converting images to sound. This project tries to go a step further and address a creative disability: blind people are not only unable to see things, they cannot express things in a visual way either. Sighted people are constantly sharing their experiences through photos and videos.
A low-cost stereo webcam can be used to send two slightly different video streams to a computer. This device would normally require red and cyan glasses to see images in 3D, but the process our brain goes through to reconstruct space can be performed by a computer as well. Just as with our eyes, the closer an object is, the more its horizontal position differs between the two camera views. This shift in horizontal position (the disparity) can be measured to build a 3D map of the scene in front of the camera. Three variables then provide useful information: width, height and, especially, depth.
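The disparity measurement described above can be sketched with a simple sum-of-absolute-differences block search; this is a minimal illustration of the principle, not the project's actual implementation, and the block size and search range are arbitrary assumptions:

```python
import numpy as np

def block_disparity(left, right, y, x, block=11, max_d=40):
    """Disparity at pixel (y, x): how many pixels the patch around (y, x)
    in the left image has shifted relative to the right image. A larger
    disparity means the object is closer to the camera."""
    h = block // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_err = 0, float("inf")
    for d in range(max_d):
        # Candidate patch in the right image, shifted d pixels to the left.
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        err = np.abs(patch - cand).sum()  # sum of absolute differences
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

Running this over every pixel yields the depth map; real implementations (e.g. OpenCV's `StereoBM`) add rectification, sub-pixel refinement and texture filtering, but the core idea is this horizontal search.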
All of this data can be converted to sound: depth is expressed as pitch (closer objects produce a higher tone), height becomes time (the image is read from top to bottom) and width determines the panning of each sound wave (its left-right position in a stereo signal).
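The mapping above can be sketched as a small synthesizer that scans a depth map row by row. The sample rate, frequency range (200-2000 Hz) and 50 ms per row are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def row_to_stereo(depth_row, duration=0.05):
    """Render one image row as a short stereo chunk. Each pixel becomes a
    sine tone: pitch rises with closeness (depth normalised to 0..1),
    and the left/right balance follows the pixel's horizontal position."""
    t = np.arange(int(SR * duration)) / SR
    n = len(depth_row)
    chunk = np.zeros((len(t), 2))
    for x, depth in enumerate(depth_row):
        freq = 200 + 1800 * depth      # closer => higher pitch
        pan = x / max(n - 1, 1)        # 0 = fully left, 1 = fully right
        tone = np.sin(2 * np.pi * freq * t) * depth / n
        chunk[:, 0] += tone * (1 - pan)  # left channel
        chunk[:, 1] += tone * pan        # right channel
    return chunk

def frame_to_audio(depth_map):
    """Scan the frame top to bottom, so height maps onto time."""
    return np.concatenate([row_to_stereo(row) for row in depth_map])
```

Playing the resulting buffer through stereo headphones gives the listener depth as pitch, width as panning and height as the passage of time, exactly as described above.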
To allow for creative use, the camera is worn on the hand. As opposed to similar projects that placed the camera on the head or the chest, we want to allow and promote any shot angle and direction. The user can trigger the video and photo functions on the go by pressing the buttons of a remote controller; audio feedback through the headphones confirms each action. The laptop or tablet that runs the processing is carried in a backpack.
Once the videos and pictures are taken, the aim of this project is to share them on YouTube, Instagram and similar platforms. The sharing habits and social conventions of these platforms are the motivation behind the whole idea. There are, therefore, two target audiences: an active one composed of visually impaired users (the performers) and a passive one composed of sighted online communities.
You can experiment with this project by downloading the source code.
At the moment, this probably works on Windows only because of the Minoru 3D Webcam drivers (this could likely be fixed in the code). You will also need MJPEG video drivers for video recording.
For optimal results, the camera must be calibrated externally (setting everything to manual, adjusting exposure and white balance…). The Minoru is quite buggy, so expect some odd behaviour.
For photo and video controls, you can use either the buttons of a mouse or a standard slideshow remote controller plugged into the laptop.
I’m not a professional developer. If you are, you will find that the code could be better organised and greatly improved. Your help is most welcome.