Appearance Based Obstacle Detection
The goal of this project is to help visually impaired people navigate an environment. I believe the biggest problem they face is circumventing obstacles while navigating, so the first goal is a system that helps with avoiding obstacles.
Ease of use is also a major concern, so as a team we decided to build the system around a mobile phone equipped with a camera.
Because a phone typically has only a single camera, and therefore cannot measure depth accurately, my approach is to detect obstacles based on their appearance.
Searching through Google Scholar, I found one of the most cited papers on this topic, which takes exactly this appearance-based approach. My first prototype implementation was written in Python, using the OpenCV bindings for Python.
The methodology described in the paper can be summarized as follows.
- Use a Gaussian filter to reduce noise in the image;
- Convert the image to the HSI colour space, separating it into Hue, Saturation and Intensity channels. Hue and Saturation describe the colour, while Intensity describes how bright it is.
- Histogram the Hue and Intensity values.
- Partition the image into four regions: Front, Left, Right and Far.
- Derive a threshold from the histograms of the Front partition.
- Flag obstacles in the Far, Left or Right partitions using this threshold.
We make a few assumptions when navigating an environment with this approach:
- The current Front is a good sample of an area without obstacles.
- There are no low hanging objects.
If we never walk into objects, the Front partition remains free of obstacles, so we always have a valid reference for deciding what counts as an obstacle in the environment.
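Since the Front partition is obstacle-free frame after frame, the reference can be kept as a running model rather than recomputed from scratch. The sketch below is my own illustration of that idea (the exponential blending, bin count and rarity threshold are assumptions, not from the paper); it shows the hue channel only.

```python
import numpy as np

class ReferenceModel:
    """Running model of 'ground' appearance built from the Front
    partition. Assumes the user keeps the Front free of obstacles, so
    every frame's Front histogram is a valid sample of free ground."""

    def __init__(self, bins=32, decay=0.9):
        self.bins = bins
        self.decay = decay  # weight given to the accumulated history
        self.hist = np.full(bins, 1.0 / bins)  # start from a flat prior

    def update(self, hue_values):
        # Histogram this frame's Front hues, normalised to sum to 1...
        frame_hist, _ = np.histogram(hue_values, bins=self.bins,
                                     range=(0, 180), density=True)
        frame_hist = frame_hist * (180 / self.bins)
        # ...and blend into the running reference, so gradual lighting
        # changes are absorbed instead of causing sudden misdetections.
        self.hist = self.decay * self.hist + (1 - self.decay) * frame_hist

    def is_rare(self, hue, threshold=0.02):
        # A hue is 'rare' (obstacle-like) if its bin has little mass.
        return bool(self.hist[int(hue) * self.bins // 180] < threshold)
```

The same structure would be duplicated for the intensity channel.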
Auditory feedback will be given depending on where obstacles are detected, to aid navigation.
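A minimal sketch of that feedback stage, assuming the per-partition flags from the detection step; the phrasing is hypothetical, and a real app would hand the string to the phone's text-to-speech engine rather than return it.

```python
def feedback_message(flags):
    """Map partition flags (e.g. {"left": True}) to a short spoken cue."""
    blocked = [name for name in ("left", "far", "right") if flags.get(name)]
    if not blocked:
        return "path clear"
    if blocked == ["far"]:
        return "obstacle ahead"
    return "obstacle " + " and ".join(blocked)
```

Keeping the cues this short matters: the user hears them continuously while walking, so every extra word is a cost.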