Inpaint free version
8/5/2023
We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: a set of homographies is estimated for each frame pair, and one is selected for aligning each pixel such that the color discrepancy is minimized and the epipolar constraints are maintained. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient domain fusion. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions or per-frame, per-pixel depth maps.
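To make the per-pixel source-selection step concrete, here is a minimal numpy sketch of one plausible interpretation: given candidate frames already warped into the target's coordinate frame, each hole pixel is filled from the candidate whose color is closest to the per-pixel median of all candidates, as a simple proxy for color consistency. The function name `fill_hole` and the median-based discrepancy measure are illustrative assumptions, not the paper's actual selection criterion (which also enforces epipolar constraints and is followed by gradient-domain fusion).

```python
import numpy as np

def fill_hole(target, hole_mask, warped_candidates):
    """Fill masked pixels of `target` from pre-warped candidate frames.

    Hypothetical sketch: for each hole pixel, copy the candidate whose
    color is closest to the per-pixel median over all candidates.
    """
    stack = np.stack(warped_candidates).astype(float)  # (K, H, W, C)
    median = np.median(stack, axis=0)                  # (H, W, C)
    # Squared color discrepancy of each candidate against the median.
    err = ((stack - median) ** 2).sum(axis=-1)         # (K, H, W)
    best = err.argmin(axis=0)                          # (H, W) candidate index
    out = target.copy()
    ys, xs = np.nonzero(hole_mask)
    out[ys, xs] = stack[best[ys, xs], ys, xs]
    return out
```

A real implementation would restrict candidates to frames where the region is actually visible and smooth the seams between sources (e.g., by gradient-domain fusion) rather than copying colors directly.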