In this paper we outline a fully parallel, locally connected computation model for the segmentation of motion events in video sequences based on spatial and motion information. Extracting motion information from video sequences is very time consuming: most of the computing effort is devoted to estimating motion vector fields, defining objects, and determining the exact boundaries of those objects. The split-and-merge segmentation of the small regions obtained by oversegmentation requires an optimization process. Our proposed algorithm starts from an oversegmented image and then merges segments by applying information from spatial and temporal auxiliary data: motion fields and motion history computed from consecutive image frames. This grouping process is defined through a similarity measure between neighboring segments, based on intensity, speed, and the time-depth of the motion history. A feedback step checks each merge, accepting or rejecting the cancellation of a segment border. Our parallel approach is independent of the number of segments and objects, since image features are defined at the pixel level instead of through a graph representation of these components. We use simple VLSI-implementable functions such as arithmetic and logical operators, local memory transfers, and convolution. These elementary instructions build up the basic routines: motion displacement field detection, disocclusion removal, anisotropic diffusion, and grouping by stochastic optimization. This relaxation-based motion segmentation can serve as a basic step in the efficient coding of image sequences and in other automatic motion tracking systems. The proposed system is ready to be implemented on a Cellular Nonlinear Network chip-set architecture.
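The merge step described above can be sketched in a few lines. This is only an illustrative reconstruction, not the paper's implementation: the actual similarity measure also uses the time-depth of motion history and a stochastic optimization with feedback, whereas the sketch below merges neighboring segments whose mean intensity and mean motion vector both agree within hypothetical tolerances `int_tol` and `flow_tol`.

```python
import numpy as np

def merge_segments(labels, intensity, flow, int_tol=10.0, flow_tol=1.0):
    """One greedy merge pass over an oversegmented label image.

    labels:    (H, W) integer segment labels (oversegmentation)
    intensity: (H, W) float intensity image
    flow:      (H, W, 2) motion vector field
    Tolerances are illustrative placeholders, not values from the paper.
    """
    out = labels.copy()

    def seg_stats(lbl):
        # Mean intensity and mean motion vector of one segment.
        mask = out == lbl
        return intensity[mask].mean(), flow[mask].mean(axis=0)

    # Collect neighboring label pairs from 4-connectivity.
    pairs = set()
    for a, b in zip(out[:, :-1].ravel(), out[:, 1:].ravel()):
        if a != b:
            pairs.add((min(a, b), max(a, b)))
    for a, b in zip(out[:-1, :].ravel(), out[1:, :].ravel()):
        if a != b:
            pairs.add((min(a, b), max(a, b)))

    for a, b in sorted(pairs):
        # A label may already have been merged away in this pass.
        if not (out == a).any() or not (out == b).any():
            continue
        ia, fa = seg_stats(a)
        ib, fb = seg_stats(b)
        # Cancel the border only if both intensity and motion agree;
        # otherwise the border is kept (a crude stand-in for the
        # feedback-based accept/reject step in the paper).
        if abs(ia - ib) < int_tol and np.linalg.norm(fa - fb) < flow_tol:
            out[out == b] = a
    return out
```

In the paper the grouping is carried out by relaxation on a locally connected array rather than by this sequential loop; the sketch only shows which per-segment features enter the similarity test.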
Number of pages: 19
Publication status: Published - Feb 1 2001
ASJC Scopus subject areas
- Signal Processing
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering