Motion segmentation

Segmenting the images of a video sequence based on the motion properties of objects is a useful building block for more sophisticated applications. For stationary cameras, background subtraction has been the stepping stone for many intelligent applications such as tracking, event detection, and object detection. My research aims to enable such applications with moving cameras as well.

Moving camera systems (hand-held cameras, automobile cameras)

Most existing algorithms segment objects in moving-camera video using optical flow, the motion of pixels in the image plane. We use optical flow in a new algorithm based on Dirichlet Processes, a non-parametric model that automatically estimates the number of moving objects and their motion parameters.
Our algorithm segments stationary objects as background irrespective of their depth, and clusters foreground objects consistently with their true world motion parameters. It works well on the wide range of videos listed below. Here is our ICCV 2013 paper describing the system; video results and further comparisons can be found in the supplementary zip file.
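As a rough illustration of the idea (not the paper's model, which is a full Dirichlet Process mixture), the sketch below clusters per-pixel optical-flow vectors with DP-means, a hard-clustering analogue of a Dirichlet process mixture in which the number of clusters is inferred from the data rather than fixed in advance. The function name and the threshold `lam` are hypothetical:

```python
import numpy as np

def dp_means(flow_vectors, lam, n_iter=20):
    """DP-means clustering of 2-D optical-flow vectors.

    A point whose squared distance to every existing centre exceeds
    `lam` spawns a new cluster, so the number of motion clusters is
    discovered from the data, in the spirit of a Dirichlet process
    mixture (this hard-clustering sketch is far simpler than a full
    DP mixture model).
    """
    centres = [flow_vectors[0]]
    assign = np.zeros(len(flow_vectors), dtype=int)
    for _ in range(n_iter):
        # Assignment step: reuse the nearest centre, or open a new one.
        for i, x in enumerate(flow_vectors):
            d2 = [np.sum((x - c) ** 2) for c in centres]
            if min(d2) > lam:
                centres.append(x.copy())
                assign[i] = len(centres) - 1
            else:
                assign[i] = int(np.argmin(d2))
        # Update step: recompute each centre as its cluster mean.
        centres = [flow_vectors[assign == k].mean(axis=0)
                   for k in range(len(centres))]
    return np.array(centres), assign
```

On synthetic flow where background pixels drift coherently with the camera and one object moves independently, the sketch recovers two motion clusters without being told how many to expect.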

If you are interested in our data set for complex background videos, it can be found here.
We are currently working on improving its accuracy and speed. The algorithm currently takes its input from an off-the-shelf optical flow package; further improvements could come from integrating optical flow estimation and segmentation within a single probabilistic model. Here are some sample videos showing performance in a wide range of scenarios:
cars8.
forest.
parking.
marple3.
marple7.
marple9.
girl.
birdfall2.
tennis.
store.
drive.
traffic.
cars10.
cars1.
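The algorithm above consumes dense optical flow from an external software package. As a hypothetical illustration of what that input represents, here is a toy block-matching estimator in NumPy; real flow packages solve a variational problem at sub-pixel accuracy, and the function name and parameters here are invented for this sketch:

```python
import numpy as np

def block_matching_flow(frame0, frame1, block=8, search=4):
    """Toy optical-flow estimator.

    For each block in frame0, find the integer displacement (within
    +/- `search` pixels) that minimises the sum of squared differences
    against frame1. This only illustrates the notion of "motion of
    pixels in the image plane"; it is not production flow code.
    """
    h, w = frame0.shape
    flow = np.zeros((h // block, w // block, 2))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = frame0[y:y + block, x:x + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    # Skip candidate windows that fall outside the image.
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    err = np.sum((frame1[yy:yy + block,
                                         xx:xx + block] - patch) ** 2)
                    if err < best_err:
                        best_err, best = err, (dx, dy)
            flow[by, bx] = best  # (dx, dy) for this block
    return flow
```

Shifting a random image two pixels to the right and running the estimator recovers a displacement of (2, 0) for interior blocks.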

Stationary camera systems (Background modeling)

[code] [data set]

By separating the components of a background modeling system (the background likelihood, the foreground likelihood, and the priors on background and foreground), we developed a simple and intuitive probabilistic model for background modeling, described in this paper. Another key feature is that we explicitly model spatially dependent priors.
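As a sketch of how those components combine, here is a deliberately minimal stand-in (not the model in the paper): a single Gaussian background likelihood per pixel, a uniform foreground likelihood, and a background prior that may vary spatially. All names are hypothetical:

```python
import numpy as np

def classify_pixels(frame, bg_mean, bg_var, p_bg, fg_range=256.0):
    """Label each pixel foreground/background by comparing posteriors.

    Background likelihood: per-pixel Gaussian N(bg_mean, bg_var).
    Foreground likelihood: uniform over the intensity range (no
    appearance model for foreground in this sketch).
    p_bg: prior probability of background; passing an array instead of
    a scalar gives a spatially varying prior.
    Returns a boolean mask that is True where a pixel is foreground.
    """
    lik_bg = (np.exp(-0.5 * (frame - bg_mean) ** 2 / bg_var)
              / np.sqrt(2 * np.pi * bg_var))
    lik_fg = 1.0 / fg_range                  # uniform foreground likelihood
    post_bg = lik_bg * p_bg                  # unnormalised posteriors
    post_fg = lik_fg * (1.0 - p_bg)
    return post_fg > post_bg
```

A pixel near the background mean stays background even under a weak prior, while an outlying intensity flips to foreground because the flat foreground likelihood dominates the vanishing Gaussian tail.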
We describe certain strengths of our model over earlier successful models, in particular the joint domain-range model, in our BMVC 2012 paper.
Most existing methods do not adapt the variance parameter of the model at each pixel, which can cause errors. We propose a pixel-wise variance and describe an adaptive method for selecting it in our CVPR 2012 paper.
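A minimal sketch of per-pixel variance adaptation, assuming a simple exponentially weighted running update rather than the adaptive selection method in the paper (names and constants here are hypothetical):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, min_var=4.0):
    """Exponentially weighted running update of per-pixel statistics.

    Each pixel keeps its own mean and variance, so noisy regions
    (e.g. waving foliage) get a wide variance while stable regions get
    a tight one, instead of a single global variance for the whole
    image. `min_var` keeps flat regions from collapsing to zero
    variance and rejecting every new observation.
    """
    diff = frame - mean
    mean = mean + alpha * diff                     # drift toward new frame
    var = (1 - alpha) * var + alpha * diff ** 2    # track squared deviation
    return mean, np.maximum(var, min_var)
```

Feeding a constant scene repeatedly drives each pixel's mean to the observed intensity and its variance down to the floor `min_var`.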
Here are some videos comparing our results to the earlier state of the art (our results are in the last row of each video):
Bootstrap.
Curtain.
Escalator.
ShoppingMall.
Lobby.