Dynamic Codebook for Foreground Segmentation in a Video
Abstract
Foreground segmentation in a video extracts the changes in an image sequence. It is a key early-stage task in many computer vision applications: information about changes in the scene must be segmented before any further analysis can take place. However, it remains difficult because of several real-world challenges such as cluttered backgrounds, illumination changes, shadows, and long-term scene changes. This paper proposes a novel method, namely a dynamic codebook (DCB), to address such challenges of dynamic backgrounds. It relies on a dynamic modeling of the background scene. Initially, a codebook is constructed to represent the background information of each pixel over a period of time. A dynamic boundary of the codebook is then maintained to support variations of the background, so that the revised codebook remains adaptive to new background environments. This makes the foreground segmentation more robust to changes in the background scene. The proposed method has been evaluated on the changedetection.net (CDnet) benchmark, a well-known video dataset for testing change-detection algorithms. The experimental results and comprehensive comparisons show a very promising performance of the proposed method.
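The abstract's pipeline (build a per-pixel codebook from background frames, then classify a pixel as foreground when no codeword's boundary covers it, while adapting matched codewords) can be sketched as follows. This is a minimal grayscale illustration of the general codebook idea with an adaptive matching boundary, not the authors' exact DCB algorithm; the class name, parameters (`boundary`, `alpha`), and update rule are all assumptions for illustration.

```python
import numpy as np

class DynamicCodebook:
    """Minimal sketch of a per-pixel codebook background model.

    NOTE: illustrative only -- a simplified codebook model with an
    adaptive ("dynamic") boundary, not the paper's exact DCB method.
    Each pixel keeps a list of codewords [mean, lo, hi].
    """

    def __init__(self, boundary=15.0, alpha=0.05):
        self.boundary = boundary  # half-width of a codeword's matching range (assumed value)
        self.alpha = alpha        # learning rate for drifting a codeword's mean (assumed value)
        self.codebooks = None     # per-pixel lists of codewords

    def train(self, frames):
        """Build a codebook per pixel from grayscale background frames of shape (H, W)."""
        h, w = frames[0].shape
        self.codebooks = [[[] for _ in range(w)] for _ in range(h)]
        for frame in frames:
            for y in range(h):
                for x in range(w):
                    self._absorb(self.codebooks[y][x], float(frame[y, x]))

    def _match(self, words, v):
        """Return the first codeword whose boundary [lo, hi] covers v, else None."""
        for cw in words:
            if cw[1] <= v <= cw[2]:
                return cw
        return None

    def _absorb(self, words, v):
        """Adapt the matching codeword toward v, or create a new codeword."""
        cw = self._match(words, v)
        if cw is None:
            words.append([v, v - self.boundary, v + self.boundary])
        else:
            cw[0] += self.alpha * (v - cw[0])                       # drift the mean
            cw[1], cw[2] = cw[0] - self.boundary, cw[0] + self.boundary  # re-center boundary

    def segment(self, frame, update=True):
        """Return a boolean foreground mask; matched pixels stay background."""
        h, w = frame.shape
        mask = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                v = float(frame[y, x])
                cw = self._match(self.codebooks[y][x], v)
                if cw is None:
                    mask[y, x] = True       # no codeword fits: foreground
                elif update:                # keep the background model adaptive
                    cw[0] += self.alpha * (v - cw[0])
                    cw[1], cw[2] = cw[0] - self.boundary, cw[0] + self.boundary
        return mask
```

In use, one would train on an initial stretch of (mostly background) frames and then call `segment` on each new frame; the per-match update is what keeps the boundaries tracking gradual illumination and scene changes.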
References
F. Yan, W. Christmas, and J. Kittler, “A tennis ball tracking algorithm for automatic annotation of tennis match,” in British Machine Vision Conference, vol. 2, 2005, pp. 619–628.
R. A. Hadi, G. Sulong, and L. E. George, “Vehicle detection and tracking techniques: a concise review,” arXiv preprint arXiv:1410.5894, 2014.
W. Kusakunniran, “Recognizing gaits on spatio-temporal feature domain,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 9, pp. 1416–1423, 2014.
R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976–990, 2010.
N. Prabhakar, V. Vaithiyanathan, A. P. Sharma, A. Singh, and P. Singhal, “Object tracking using frame differencing and template matching,” Research Journal of Applied Sciences, Engineering and Technology, vol. 4, no. 24, pp. 5497–5501, 2012.
D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, “Towards robust automatic traffic scene analysis in real-time,” in Proceedings of the 12th IAPR International Conference on Pattern Recognition (Vol. 1: Computer Vision & Image Processing). IEEE, 1994, pp. 126–131.
J. N. Kapur, P. K. Sahoo, and A. K. Wong, “A new method for gray-level picture thresholding using the entropy of the histogram,” Computer Vision, Graphics, and Image Processing, vol. 29, no. 3, pp. 273–285, 1985.
N. Otsu, “A threshold selection method from gray-level histograms,” Automatica, vol. 11, no. 285–296, pp. 23–27, 1975.
C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, “Pfinder: Real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.
T. Horprasert, D. Harwood, and L. S. Davis, “A statistical approach for real-time robust background subtraction and shadow detection,” in IEEE ICCV, vol. 99, 1999, pp. 1–19.
A. Elgammal, D. Harwood, and L. Davis, “Non-parametric model for background subtraction,” in European Conference on Computer Vision. Springer, 2000, pp. 751–767.
A. Ilyas, M. Scuturici, and S. Miguet, “Real time foreground-background segmentation using a modified codebook model,” in Advanced Video and Signal Based Surveillance, 2009. AVSS’09. Sixth IEEE International Conference on. IEEE, 2009, pp. 454–459.
Y. Li, F. Chen, W. Xu, and Y. Du, “Gaussian-based codebook model for video background subtraction,” in International Conference on Natural Computation. Springer, 2006, pp. 762–765.
I.-T. Sung, S.-C. Hsu, and C.-L. Huang, “Hybrid codebook model for foreground object segmentation and shadow/highlight removal,” Journal of Information Science and Engineering, vol. 30, pp. 1965–1984, 2014.
M. A. Mousse, E. C. Ezin, and C. Motamed, “Foreground-background segmentation based on codebook and edge detector,” in Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on. IEEE, 2014, pp. 119–124.
L. Maddalena and A. Petrosino, “A self-organizing approach to background subtraction for visual surveillance applications,” IEEE Transactions on Image Processing, vol. 17, no. 7, pp. 1168–1177, 2008.
S. Bianco, G. Ciocca, and R. Schettini, “How far can you get by combining change detection algorithms?” arXiv preprint arXiv:1505.02921, 2015.
Y. Wang, P.-M. Jodoin, F. Porikli, J. Konrad, Y. Benezeth, and P. Ishwar, “CDnet 2014: an expanded change detection benchmark dataset,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 387–394.
S. Brutzer, B. Höferlin, and G. Heidemann, “Evaluation of background subtraction techniques for video surveillance,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 1937–.
H. Sajid and S.-C. S. Cheung, “Background subtraction for static & moving camera,” in Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, 2015, pp. 4530–4534.
K. Kim, T. H. Chalidabhongse, D. Harwood, and L. Davis, “Real-time foreground–background segmentation using codebook model,” Real-Time Imaging, vol. 11, no. 3, pp. 172–185, 2005.
P. Noriega, B. Bascle, and O. Bernier, “Local kernel color histograms for background subtraction,” in VISAPP (1), 2006, pp. 213–219.
F. J. López-Rubio, E. López-Rubio, R. M. Luque-Baena, E. Dominguez, and E. J. Palomo, “Color space selection for self-organizing map based foreground detection in video sequences,” in 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014, pp. 3347–.
H. Sajid and S.-C. S. Cheung, “Universal multimode background subtraction,” submitted to IEEE Transactions on Image Processing, 2015.
L. Maddalena and A. Petrosino, “A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection,” Neural Computing and Applications, vol. 19, no. 2, pp. 179–186, 2010.
ECTI TRANSACTIONS ON COMPUTER AND INFORMATION TECHNOLOGY VOL.10, NO.2 November 2016
R. Wang, F. Bunyak, G. Seetharaman, and K. Palaniappan, “Static and moving object detection using flux tensor with split Gaussian models,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 414–418.
M. Sedky, M. Moniri, and C. C. Chibelushi, “Spectral-360: A physics-based technique for change detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 399–402.
K. Wang, C. Gou, and Y. Liu, “M4CD: A robust change detection method with multimodal background modeling and multi-view foreground learning,” submitted to IEEE Transactions on Image Processing, 2015.
M. De Gregorio and M. Giordano, “WiSARDrp for change detection in video sequences,” submitted to the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
M. De Gregorio and M. Giordano, “Change detection with weightless neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 403–407.
G. Ramírez-Alonso and M. I. Chacón-Murguía, “Auto-adaptive parallel SOM architecture with a modular analysis for dynamic object segmentation in videos,” Neurocomputing, vol. 175, pp. –1000, 2016.
G. Allebosch, D. Van Hamme, F. Deboeverie, P. Veelaert, and W. Philips, “C-EFIC: Color and edge based foreground background segmentation with interior classification,” in International Joint Conference on Computer Vision, Imaging and Computer Graphics. Springer, 2015, pp. –454.
G. Allebosch, F. Deboeverie, P. Veelaert, and W. Philips, “EFIC: edge based foreground background segmentation and interior classification for dynamic camera viewpoints,” in International Conference on Advanced Concepts for Intelligent Vision Systems. Springer, 2015, pp. –141.
Y. Benezeth, P.-M. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, “Comparative study of background subtraction algorithms,” Journal of Electronic Imaging, vol. 19, no. 3, 2010.
Z. Zivkovic and F. van der Heijden, “Efficient adaptive density estimation per image pixel for the task of background subtraction,” Pattern Recognition Letters, vol. 27, no. 7, pp. 773–780, 2006.
D. Liang and S. Kaneko, “Improvements and experiments of a compact statistical background model,” arXiv preprint arXiv:1405.6275, 2014.
S. Varadarajan, P. Miller, and H. Zhou, “Spatial mixture of Gaussians for dynamic background modelling,” in Advanced Video and Signal Based Surveillance (AVSS), 2013 10th IEEE International Conference on. IEEE, 2013, pp. 63–68.
B. Wang and P. Dudek, “A fast self-tuning background subtraction algorithm,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 395–.
Z. Zivkovic, “Improved adaptive Gaussian mixture model for background subtraction,” in Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, vol. 2. IEEE, 2004, pp. 28–31.
X. Lu, “A multiscale spatio-temporal background model for motion detection,” in 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014, pp. 3268–3271.
C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, vol. 2. IEEE, 1999.
A. Miron and A. Badii, “Change detection based on graph cuts,” in 2015 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2015, pp. 273–276.