Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency refers to the significant target(s) in a video; the object of interest is further analyzed in high-level applications. Saliency can be segregated from the background when the two exhibit different visual cues, so saliency detection is often formulated as background subtraction. However, saliency detection remains challenging. For instance, a dynamic background can result in false positive errors, while camouflage results in false negative errors. With moving cameras, the captured scenes are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence. Based on the idea of video completion, a good background frame can be synthesized even when a changing background and moving objects co-exist. We adopt a background/foreground segmenter that was pre-trained on a specific video dataset; it can also detect saliency in unseen videos. The background modeler adjusts the background image dynamically when the background/foreground segmenter's output deteriorates while processing a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results obtained on pan-tilt-zoom (PTZ) videos show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. On more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.