ABSTRACT
Varying illumination and image blur are among the major challenges faced by contemporary optical flow estimation methods. Despite significant progress in the field, these degradations have received little attention, and recent methods produce markedly worse results on images with variable illumination and blur. In this paper, we investigate the effects of color space transformations on optical flow estimated from degraded and noisy images. Our experiments use both clean and noisy images containing different kinds of blur and atmospheric effects such as fog, mist, shadows, and dark regions. By estimating optical flow on three types of sequences in parallel (super clean, clean, and noisy) in four popular color systems, we observe how the color space transformation affects the estimated flow fields. The four color systems are RGB (red, green, blue), HSV (hue, saturation, value), HSL (hue, saturation, lightness), and XYZ (as standardized by the International Commission on Illumination in 1931). We find that the output of an optical flow algorithm depends not only on the color system adopted, but also that certain color spaces favor particular types of image sequences: the XYZ color system is more favorable for images that satisfy the brightness constancy assumption, while the HSV color space is more suitable for blurry and noisy images. Keeping all other parameters unchanged and transforming only the color space, we estimate the optical flow. As expected, the flow an algorithm estimates from clean images is not consistent with the flow estimated from the same images corrupted by noise; our objective is to compare this adverse effect across color spaces. We report and compare the flow estimation errors in the four color systems and identify the best color space in each case. The paper also discusses the possible factors behind these variable outcomes, with insight into the basic frameworks of traditional optical flow methods.
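The brightness constancy assumption referred to above states that pixel intensities are preserved along the motion, I(x, y, t) = I(x + u, y + v, t + 1); its linearization yields the classical optical flow constraint I_x u + I_y v + I_t = 0 used by traditional variational methods. To make the experimental setup concrete, below is a minimal Python/OpenCV sketch of the evaluation loop: convert a frame pair into one of the four color systems, estimate flow on an intensity-like channel, and score the result against ground truth. The paper does not specify its flow algorithm or channel handling, so Farneback flow, the per-space channel choices, and the helper names here are illustrative assumptions only.

    import cv2
    import numpy as np

    # Color-conversion code and index of an intensity-like channel for each
    # color system studied (assumed channel choices; OpenCV calls HSL "HLS").
    SPACES = {
        "RGB": (cv2.COLOR_RGB2GRAY, None),  # grayscale projection of RGB
        "HSV": (cv2.COLOR_RGB2HSV, 2),      # V (value) channel
        "HSL": (cv2.COLOR_RGB2HLS, 1),      # L (lightness) channel
        "XYZ": (cv2.COLOR_RGB2XYZ, 1),      # Y (luminance) channel
    }

    def flow_in_color_space(frame1_rgb, frame2_rgb, space="RGB"):
        """Transform both frames into `space`, then estimate dense flow
        (Farneback here, standing in for the paper's unspecified method)."""
        code, channel = SPACES[space]
        f1 = cv2.cvtColor(frame1_rgb, code)
        f2 = cv2.cvtColor(frame2_rgb, code)
        if channel is not None:
            f1, f2 = f1[..., channel], f2[..., channel]
        return cv2.calcOpticalFlowFarneback(
            f1, f2, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    def average_endpoint_error(flow, gt_flow):
        """Mean Euclidean distance between estimated and ground-truth
        flow vectors (the standard AEE metric)."""
        return float(np.mean(np.linalg.norm(flow - gt_flow, axis=-1)))

Running flow_in_color_space on the same clean and noisy frame pairs for each key in SPACES, and comparing the resulting average endpoint errors, reproduces the kind of per-color-space comparison described above.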