
Generating Object Proposals

22 Dec

This is a follow-up post to A Seismic Shift in Object Detection. In the earlier post I discussed the resurgence of segmentation for object detection; in this post I go into more technical detail about the algorithms for generating the segments and object proposals. If you haven’t yet, you should read my previous post first. 🙂

First, a brief historical overview. In classic segmentation the goal was to assign every pixel in an image to one of K labels such that distinct objects receive unique labels. So ideally an algorithm would generate separate segments for your cat and your couch and pass these on to the next stage of processing. Unfortunately, this is a notoriously difficult problem, and arguably it’s simply the wrong problem formulation: mistakes are irreversible and incorrect segments harm all subsequent processing. If your kitty had its head chopped off or was permanently merged with the couch, well, tough luck. Thus classic segmentation rarely serves as a pre-processing step for detection, and workarounds (e.g. sliding windows) have been developed.


A major innovation in how we think about segmentation occurred around 2005. In their work on estimating geometric context, Derek Hoiem et al. proposed to use multiple overlapping segmentations; this was explored further by Bryan Russell and Tomasz Malisiewicz and their collaborators (see here and here). The idea is as simple as it sounds: generate multiple candidate segmentations, and while your kitty may be disfigured in many, hopefully at least one of the segmentations will contain her whole and unharmed. This was a leap in thinking because it shifted the focus to generating a diversity of segmentations as opposed to a single, perfect (but unachievable) segmentation.

The latest important leap, and the focus of this post, was first made in 2010 concurrently by three groups (Objectness, CPMC, Object Proposals). The key observation was this: since the unit of interest for subsequent processing in object detection and related tasks is a single object segment, why exhaustively (and uniquely) label every pixel in an image? Instead why not directly generate object proposals (either segments or bounding boxes) without attempting to provide complete image segmentations? Doing so is both easier (no need to label every pixel) and more forgiving (multiple chances to generate good proposals). Sometimes a slight problem reformulation makes all the difference!

Below I go over five of the arguably most important papers on generating object proposals (a list of additional papers can be found at the end of this post). Keep reading for details or skip to the discussion below.


Objectness: One of the earliest papers on generating object proposals. The authors sample and rank 100,000 windows per image according to their likelihood of containing an object. This 'objectness' score is based on multiple cues derived from saliency, edges, superpixels, color and location. The proposals tend to fit objects fairly loosely, but the first few hundred are of high quality (see Fig. 6 in this paper). The algorithm is fast (a few seconds per image) and outputs boxes.
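To make the sample-and-rank structure concrete, here is a minimal Python sketch. The uniform window sampler and the weighted linear combination of cue scores are simplifications of my own; the paper's actual saliency/edge/superpixel/color cues and its Bayesian cue combination are not reproduced here.

```python
import numpy as np

def sample_windows(img_h, img_w, n_windows, rng):
    """Uniformly sample candidate windows as (x0, y0, x1, y1) boxes."""
    xs = rng.integers(0, img_w, size=(n_windows, 2))
    ys = rng.integers(0, img_h, size=(n_windows, 2))
    return np.stack([xs.min(1), ys.min(1), xs.max(1) + 1, ys.max(1) + 1], axis=1)

def rank_windows(image, cues, weights, n_windows=100_000, n_keep=1_000, seed=0):
    """Score every sampled window with a weighted sum of per-window cue scores
    and keep the top n_keep.  Each cue is a callable (image, window) -> float
    supplied by the caller (placeholders for the paper's cues)."""
    rng = np.random.default_rng(seed)
    windows = sample_windows(image.shape[0], image.shape[1], n_windows, rng)
    scores = np.zeros(n_windows)
    for cue, w in zip(cues, weights):
        scores += w * np.array([cue(image, win) for win in windows])
    keep = np.argsort(-scores)[:n_keep]
    return windows[keep], scores[keep]
```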


CPMC: Published concurrently with Objectness, the idea is to use graph cuts with different random seeds and parameters to obtain multiple binary foreground / background segmentations. Each generated foreground mask serves as an object proposal, and the proposals are ranked according to a learned scoring function. The algorithm is slow (~8 min / image) as it relies on the gPb edge detector, but it generates high quality segmentation masks (see also this companion paper).
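The recipe is roughly: vary the foreground seed and the foreground bias, solve a binary graph-cut problem for each setting, and rank the resulting masks. Here is a schematic Python sketch of that control flow; the min-cut solver and the learned scorer are caller-supplied stand-ins, not CPMC's actual energies or ranker.

```python
import numpy as np

def cpmc_style_proposals(image, graph_cut_segment, score_fn,
                         n_seeds=50, biases=(0.1, 0.3, 0.5), seed=0):
    """Solve many binary foreground/background problems, each with a different
    seed location and foreground bias, then rank the resulting masks.

    graph_cut_segment(image, seed_xy, bias) -> boolean (H, W) mask  (stand-in min-cut solver)
    score_fn(mask) -> float                                         (stand-in learned ranker)
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    masks = []
    for _ in range(n_seeds):
        seed_xy = (int(rng.integers(0, w)), int(rng.integers(0, h)))  # random foreground seed
        for bias in biases:                                           # vary the unary foreground bias
            mask = graph_cut_segment(image, seed_xy, bias)
            if mask.any():
                masks.append(mask)
    return sorted(masks, key=score_fn, reverse=True)                  # best-scoring masks first
```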


Object Proposals: Similar in spirit to CPMC (and published just a few months later), the authors generate multiple foreground / background segmentations and use these as object proposals. It has the same strengths and weaknesses as CPMC: high quality segmentation masks but long computation time, due in part to the reliance on gPb edges.


Selective Search: As discussed in my previous post, arguably the top three methods for object detection as of ICCV13 all used selective search in their detection pipelines. The key to the success of selective search is its speed (~8 seconds / image) and high recall (97% of objects detected given 10000 candidates per image). Selective search computes multiple hierarchical segmentations by greedily merging superpixels from Felzenszwalb and Huttenlocher (F&H), computed on different color spaces. The object proposals are the various segments in the hierarchies, or the bounding boxes surrounding them.
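The core loop is easy to sketch. Below is a stripped-down Python illustration of the bottom-up merging idea: start from F&H superpixels (here via scikit-image's felzenszwalb), repeatedly merge the most similar pair of regions, and treat every region ever formed as a proposal. The similarity measure is a caller-supplied stand-in, and for brevity I merge arbitrary pairs rather than only adjacent ones and use a single color space, so this is not the actual Selective Search implementation.

```python
import numpy as np
from skimage.segmentation import felzenszwalb  # F&H superpixels

def merge_based_proposals(image, similarity, scale=100, sigma=0.8, min_size=50):
    """Grow a hierarchy by greedily merging the most similar regions,
    recording the bounding box of every region ever formed."""
    labels = felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)
    regions = {int(l): np.argwhere(labels == l) for l in np.unique(labels)}  # label -> (row, col) pixels

    def bbox(pix):
        (y0, x0), (y1, x1) = pix.min(axis=0), pix.max(axis=0)
        return (x0, y0, x1, y1)

    boxes = [bbox(p) for p in regions.values()]
    while len(regions) > 1:
        keys = list(regions)
        # Find the most similar pair of regions (adjacency test omitted for brevity).
        best, best_sim = None, -np.inf
        for i, a in enumerate(keys):
            for b in keys[i + 1:]:
                s = similarity(image, regions[a], regions[b])
                if s > best_sim:
                    best, best_sim = (a, b), s
        a, b = best
        merged = np.concatenate([regions.pop(a), regions.pop(b)])
        regions[max(keys) + 1] = merged
        boxes.append(bbox(merged))  # every intermediate region yields a proposal
    return boxes
```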


Randomized Prim’s (RP): A simple and fast approach to generating high quality proposal boxes (again based on F&H superpixels). The authors propose a randomized greedy algorithm for computing sets of superpixels that are likely to belong together. The quality of proposals is high and object coverage is good given a large number of proposals (1000 – 10000). RP is the fastest of the batch (<1 second per image) and is a promising pre-processing step for detection.
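Here is a rough Python sketch of a randomized, Prim's-style grouping over a superpixel adjacency graph. The edge affinities and the stopping rule are placeholders (in the paper both are learned), so treat this as the shape of the algorithm rather than a faithful reimplementation.

```python
import numpy as np

def randomized_prim_proposals(adjacency, affinity, sp_boxes, n_proposals=1000, seed=0):
    """Grow each proposal by starting at a random superpixel and repeatedly
    adding a neighbouring superpixel with probability proportional to its
    affinity, stopping at a randomly chosen group size.

    adjacency[i]      -> list of superpixels touching superpixel i
    affinity[(i, j)]  -> similarity of superpixels i and j (stand-in; learned in the paper)
    sp_boxes[i]       -> (x0, y0, x1, y1) bounding box of superpixel i
    """
    rng = np.random.default_rng(seed)
    n_sp = len(sp_boxes)
    proposals = []
    for _ in range(n_proposals):
        group = {int(rng.integers(n_sp))}        # random starting superpixel
        stop_size = int(rng.integers(2, 30))     # random stopping point (my assumption)
        while len(group) < stop_size:
            frontier = [(i, j) for i in group for j in adjacency[i] if j not in group]
            if not frontier:
                break
            w = np.array([affinity[(i, j)] for i, j in frontier], dtype=float)
            i, j = frontier[rng.choice(len(frontier), p=w / w.sum())]
            group.add(j)
        boxes = np.array([sp_boxes[i] for i in group])
        proposals.append((boxes[:, 0].min(), boxes[:, 1].min(),
                          boxes[:, 2].max(), boxes[:, 3].max()))
    return proposals
```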


So what’s the best approach? It depends on the constraints and the application. If speed is critical, the only candidates are Objectness, Selective Search or RP. For high quality segments, CPMC or Object Proposals are best; the bounding boxes returned by RP also appear promising. For object detection, recall is critical, and thus generating thousands of candidates with Selective Search or RP is likely the best bet. For domains where a more aggressive pruning of windows is necessary, for example weakly supervised or unsupervised learning, Objectness or CPMC are the most promising candidates.

One interesting observation is that all five algorithms described above utilize either gPb or F&H for input edges or superpixels. The quality and speed of the input edge detector help determine the speed of the resulting proposals and their adherence to object boundaries. gPb is accurate but slow (multiple minutes per image) while F&H is fairly fast but of lower quality. So, this seems like the perfect spot for a shameless plug for our own work: our recent edge detector presented at ICCV runs in real time (30 fps) and produces edges of similar quality to gPb (and even somewhat higher). Read more here. 🙂


I expect that, after its recent successes, object proposal generation will continue to receive strong interest from the community. I hope to see the development of approaches that are faster, have higher recall with fewer proposals, and better adhere to object boundaries. Downstream, it will be interesting to see more algorithms take advantage of the segmentation masks associated with the proposals (currently most, but not all, detection approaches discard the segmentation masks). And of course I have to wonder: will we experience yet another paradigm shift in segmentation? Let’s see where we end up…


Below is a list of additional papers on generating object proposals. If I missed anything relevant please email me or leave a comment and I’ll add a link!

A Seismic Shift in Object Detection

10 Dec

Object detection has undergone a dramatic and fundamental shift. I’m not talking about deep learning here – deep learning is really more about classification and specifically about feature learning. Feature learning has begun to play and will continue to play a critical role in machine vision. Arguably, in a few years we’ll have a diversity of approaches for learning rich feature hierarchies from large amounts of data; it’s a fascinating topic. However, as I said, this post is not about deep learning.

Rather, perhaps an even more fundamental shift has occurred in object detection: the recent crop of top detection algorithms abandons sliding windows in favor of segmentation in the detection pipeline. Yes, you heard right, segmentation!


First some evidence for my claim. Arguably the three top performing object detection systems as of the date of this post (12/2013) are the following:

  1. Segmentation as Selective Search++ (UvA-Euvision)
  2. Regionlets for Object Detection (NEC-MU)
  3. Rich Feature Hierarchies for Accurate Object Detection (R-CNN)

The first two are the winning and second place entries in the ImageNet13 detection challenge. The winning entry (UvA-Euvision) is an unpublished extension of Koen van de Sande’s earlier work (hence I added the “++”). The third paper is Ross Girshick et al.’s recent work, and while Ross did not compete in the ImageNet challenge due to lack of time, his results on PASCAL are superior to the NEC-MU numbers (as far as I know no direct comparison exists between Ross’s work and the winning ImageNet entry).

Now, here’s the kicker: all three detection algorithms shun sliding windows in favor of a segmentation pre-processing step, specifically the region generation method of Koen van de Sande et al., Segmentation as Selective Search.


Now this is not your father’s approach to segmentation: there are a number of notable differences that allow the current batch of methods to succeed where yesteryear’s methods failed. This is really a topic for another post, but the core idea is to generate around 1-2 thousand candidate object segments per image that, with high probability, coarsely capture most of the objects in the image. The candidate segments themselves may be noisy and overlapping, and in general need not capture the objects perfectly. Instead, they’re converted to bounding boxes and passed into various classification algorithms.
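In pseudocode, the shape of such a pipeline is roughly the following Python sketch; all of the callables (the proposal generator, the feature extractor, the per-category classifiers) are stand-ins rather than any particular system's components.

```python
def detect_with_proposals(image, propose_boxes, featurize, classifiers, threshold=0.5):
    """Proposal-based detection: generate ~1-2k candidate boxes, featurize
    each, and score it with one classifier per category."""
    detections = []
    for box in propose_boxes(image):            # ~1,000-2,000 candidates (segments reduced to boxes)
        features = featurize(image, box)        # hand-designed or learned features
        for label, clf in classifiers.items():  # one scorer per object category
            score = clf(features)
            if score > threshold:
                detections.append((label, box, score))
    return detections
```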

Incidentally, Segmentation as Selective Search is just one of many recent algorithms for generating candidate regions / bounding boxes (Objectness was perhaps the first); however, again, this is a subject for another post…

So what advantage does region generation give over sliding window approaches? Sliding window methods perform best for objects with a fixed aspect ratio (e.g., faces, pedestrians). For more general object detection, a search must be performed over position, scale and aspect ratio. The resulting 4-dimensional search space is large and difficult to search exhaustively. One way to look at deformable part models is that they perform this search efficiently; however, this places severe restrictions on the models themselves. Thus we were stuck as a community: while we and our colleagues in machine learning derived increasingly sophisticated classification machinery for various problems, for object detection we were restricted to approaches able to handle variable aspect ratios efficiently.
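To get a feel for the numbers (illustrative figures of my own, not taken from any particular detector), a dense search over that space quickly reaches hundreds of thousands of windows per image, compared to a couple of thousand region proposals:

```python
# Back-of-the-envelope count of a dense search over position x scale x aspect ratio.
positions = (640 // 8) * (480 // 8)   # 8-pixel stride on a 640x480 image -> 4,800 locations
scales = 10                           # e.g. a few octaves plus intermediate scales
aspect_ratios = 10                    # e.g. ratios from tall-and-thin to short-and-wide
windows = positions * scales * aspect_ratios
print(windows)                        # 480,000 windows to classify per image
# versus ~1,000-2,000 region proposals handed to the classifier
```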

The new breed of segmentation algorithms allows us to bypass the need for efficiently searching over the space of all bounding boxes and lets us employ more sophisticated learning machinery to perform classification. The unexpected thing is that they actually work!

This is a surprising development that runs counter to everything we’ve known about object detection. For the past ten years, since Viola and Jones popularized the sliding window framework, dense window classification has dominated detection benchmarks (e.g. PASCAL or Caltech Peds). While there have been other paradigms, based for example on interest points, none could match the reliability and robustness of sliding windows. Now all this has changed!


Object detectors have evolved rapidly and their accuracy has increased dramatically over the last decade. So what have we learned? A few lessons come to mind: design better features (e.g. histograms of gradients), employ efficient classification machinery (e.g. cascades), and use flexible models to handle deformation and variable aspect ratios (e.g. part-based models). And of course use a sliding window paradigm.

Recent developments have shattered our collective knowledge of how to perform object detection. Take a look at this diagram from Ross’s recent work:

[Diagram: the detection pipeline from the R-CNN paper]

Observe:

  1. sliding windows -> segmentation
  2. designed features -> deep learned features
  3. part-based models -> linear SVMs

Now ask yourself: a few years ago, would you have expected such a dramatic overturning of all our knowledge about object detection!?

Our field has jumped out of a local minimum and exciting times lie ahead. I expect progress to be rapid in the next few years – it’s an amazing time to be doing research in object detection 🙂