We present an approach to robust lane tracking for assisted and autonomous driving, particularly under poor visibility. Autonomous detection of lane markers improves road safety, and purely visual tracking is desirable for broad vehicle compatibility and for reducing sensor intrusion, cost, and energy consumption. However, visual approaches are often rendered ineffective by factors such as occlusion, poor weather, and worn paint. Our method, named SafeDrive, attempts to improve visual lane detection in drastically degraded visual conditions without relying on additional active sensors. In scenarios where visual lane detection algorithms fail to detect lane markers, the proposed approach uses the vehicle's location to retrieve alternate imagery of the road and attempts detection on this secondary image. Subsequently, using a combination of feature-based and pixel-based alignment, an estimated location of the lane markers is found in the current scene. We demonstrate the effectiveness of our system on actual driving data from locations in the United States, with Google Street View as the source of alternate imagery.
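The fallback pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector, imagery lookup, and alignment step are passed in as hypothetical callables, and the alignment is assumed to yield a 3x3 homography that maps points from the alternate image into the current frame.

```python
import numpy as np

def project_lane_markers(lane_points_alt, H):
    """Map lane-marker points (N x 2) found in the alternate image into the
    current scene via a 3x3 homography H, assumed to come from the combined
    feature- and pixel-based alignment step."""
    pts_h = np.hstack([lane_points_alt, np.ones((len(lane_points_alt), 1))])
    proj = pts_h @ H.T                       # apply homography in homogeneous coords
    return proj[:, :2] / proj[:, 2:3]        # normalize back to image coordinates

def safedrive_lane_estimate(current_image, location,
                            detect, fetch_alternate, align):
    """SafeDrive-style fallback (sketch): try the visual detector on the live
    frame; on failure, fetch alternate imagery for the vehicle's location
    (e.g. a Street View lookup), detect lanes there, and project the result
    back into the current scene."""
    lanes = detect(current_image)
    if lanes is not None:
        return lanes                         # primary detection succeeded
    alt_image = fetch_alternate(location)    # location-indexed alternate imagery
    alt_lanes = detect(alt_image)
    if alt_lanes is None:
        return None                          # no lanes found in either view
    H = align(alt_image, current_image)      # hypothetical alignment -> homography
    return project_lane_markers(alt_lanes, H)
```

With an identity homography the projection leaves points unchanged, which makes the geometry easy to sanity-check before plugging in a real alignment routine.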