Autonomous detection of lane markers improves road safety, and purely visual tracking is desirable for broad vehicle compatibility and for reducing sensor intrusion, cost, and energy consumption. However, visual approaches often fail due to factors such as occlusion, poor weather conditions, and worn lane paint. We present an approach to robust lane tracking for assisted and autonomous driving, particularly under poor visibility. Our method, named SafeDrive, improves visual lane detection in drastically degraded visual conditions without relying on active sensors beyond a camera and location data, both readily available on a standard smartphone.
In situations where lane markers are not visible, e.g., because they are partially or fully covered by snow, poorly lit, or obscured by sun glare, the proposed approach uses the vehicle's location to retrieve alternate imagery of the road at that location from an existing database of such images. The database is indexed by GPS location and may contain multiple images of the same location taken at different times. By matching against the current image acquired from a forward-looking camera, the two most similar images are selected from the database. These alternate images are then used to coarsely reconstruct a three-dimensional view of the street at the current location, including adjacent buildings and road markers. Adjacent buildings are reconstructed by feature triangulation; road markers, being less distinctive, are reconstructed using stereo-based methods. Because vehicle motion is predominantly forward with little lateral movement, traditional planar stereo rectification is not robust: disparity cannot be computed accurately, which degrades depth estimation. To address this, the proposed approach uses polar rectification, which accommodates all possible camera motions, including forward motion. After 3D reconstruction, the adjacent buildings are matched against the current image to estimate the current camera pose within the 3D street view by solving the Perspective-n-Point (PnP) problem. Finally, the 3D lane markers are projected onto the current image, after which any of a number of lane detection algorithms can be used to locate the lanes.
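To make the first step concrete, the sketch below shows one simple way a GPS-indexed image database could be queried: entries within a small radius of the query location are returned as retrieval candidates. The database layout, field names, and the 25 m radius are illustrative assumptions, not part of the described system.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_images(db, lat, lon, radius_m=25.0):
    """Return all database entries captured within radius_m of the query."""
    return [e for e in db
            if haversine_m(lat, lon, e["lat"], e["lon"]) <= radius_m]

# Hypothetical database: multiple captures of each location at different times
db = [
    {"img": "a_0900.jpg", "lat": 44.9740, "lon": -93.2277},
    {"img": "a_1700.jpg", "lat": 44.9740, "lon": -93.2277},
    {"img": "b_1200.jpg", "lat": 44.9800, "lon": -93.2500},
]
hits = nearby_images(db, 44.9741, -93.2278)  # query near location "a"
```

In a full system, the two most similar images would then be selected from `hits` by appearance matching against the current camera frame.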
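The final step, projecting reconstructed 3D lane markers into the current frame, can be sketched with a standard pinhole model once the camera pose has been recovered (e.g., by solving PnP). All numbers below are illustrative: the intrinsics, the assumed identity rotation, the 1.5 m camera height, and the lane-marker coordinates are made up for the example.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixels via x = K [R | t] X."""
    cam = R @ points_3d.T + t.reshape(3, 1)  # world -> camera frame
    uv = K @ cam                              # camera -> homogeneous pixels
    return (uv[:2] / uv[2]).T                 # perspective divide

# Illustrative intrinsics: 800 px focal length, principal point (640, 360)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # assumed pose from PnP: no rotation
t = np.array([0.0, 1.5, 0.0])    # camera 1.5 m above the road plane

# Hypothetical left-lane marker points on the road, 5-20 m ahead
lane = np.array([[-1.8, 0.0, z] for z in range(5, 21, 5)], dtype=float)
pixels = project_points(lane, K, R, t)
```

The projected pixels converge toward the principal point with distance, reproducing the familiar vanishing-point geometry of lane markings; a lane detection algorithm can then refine these projected locations in the current image.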