Substantial performance improvement of a wide-area video surveillance network can be obtained by adding a Line-of-Sight sensor. The research described in this thesis shows that, although the Line-of-Sight sensor cannot monitor areas with the ubiquity of video cameras, the combined network produces substantially fewer false alarms and superior location precision for multiple moving people than video alone. Recent progress in the fabrication of inexpensive video cameras has prompted a new approach to wide-area surveillance of busy areas, such as an airport corridor, in which the problem is treated as a distributed sensor network. The computation and communication needed to establish image registration between the cameras grow rapidly as the number of cameras increases. Computation is required to detect people in each image, establish correspondences between people in two or more images, compute 3-D positions from each corresponding pair, and track targets over time, as the sketch below illustrates. Adding a Line-of-Sight sensor as a location detection system yields a substantial improvement by decoupling the detection, localization, and identification subtasks: if the ‘where’ can be answered by the location detection system, the ‘what’ can be addressed most effectively by the video.
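
The following minimal sketch (not the system developed in this thesis) illustrates why purely camera-based fusion scales poorly: each camera pair requires correspondence and triangulation, so the work grows quadratically with the number of cameras. The camera matrices, the single-person scene, and the linear (DLT) triangulation are illustrative assumptions only.

# Sketch: pairwise multi-camera localization of one person (synthetic data).
from itertools import combinations
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

def project(P, X):
    """Project a 3-D point into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic setup: N cameras spread along a corridor, one person at X_true.
rng = np.random.default_rng(0)
X_true = np.array([2.0, 1.5, 10.0])
N = 6
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
cams = [K @ np.hstack([np.eye(3), np.array([[-1.0 * i], [0.0], [0.0]])])
        for i in range(N)]

# "Detection" per camera: projection of the person plus pixel noise.
detections = [project(P, X_true) + rng.normal(0, 0.5, 2) for P in cams]

# Pairwise fusion: O(N^2) camera pairs, each needing correspondence
# (trivial here, since there is only one person) and triangulation.
pairs = list(combinations(range(N), 2))
estimates = [triangulate_dlt(cams[i], cams[j], detections[i], detections[j])
             for i, j in pairs]
print(f"{N} cameras -> {len(pairs)} camera pairs to process")
print("mean 3-D estimate:", np.mean(estimates, axis=0))

With six cameras there are already fifteen pairs to reconcile every frame; a separate location detection system that answers the ‘where’ directly removes this pairwise burden from the video network.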