Available Software and Databases



Go to (within this page):

VXL: The Vision-"something" Libraries [back to menu]

Many of the projects in our research lab use the VXL libraries. Here you can find useful links and instructions that will help you build these libraries.

LEMSVXL [back to menu]


Many of the projects in our research lab use our local LEMSVXL libraries. Here you can find useful links and instructions that will help you build these libraries.


Software [back to menu]

Third Order Edge Detector

The traditional approach to edge detection (e.g., Canny) using image derivatives localizes edges at the maxima of the gradient magnitude |grad I| in the direction of the gradient grad I/|grad I|, which gives grad(|grad I|) . grad I/|grad I| = 0. In Cartesian coordinates, this condition can be written as F(x,y) = Ix^2*Ixx + 2*Ix*Iy*Ixy + Iy^2*Iyy = 0, which involves derivatives up to second order. However, the edge orientation is usually taken simply as the direction orthogonal to the image gradient, which involves only first-order derivatives. This is why the orientations of edges as computed by the gradient operator are generally incorrect: the tangent computation needs to involve one order of derivatives more than the computation that localizes the edges, and hence needs third-order derivatives. The tangent to the edge contour can be correctly computed from the gradient of F(x,y) at its zero level set, so the edge orientation involves derivatives up to third order. For this reason we have chosen to call our edge detector the "Third-order orientation detector". In the image above, edgels computed by the traditional method (in red) are compared to those computed by the third-order orientation operator (in green). Notice the consistency of the third-order edges with respect to the edge curves.
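The idea can be sketched in a few lines of numpy. This is a minimal illustration only, using plain central differences for the derivatives (a real detector would use smoothed Gaussian-derivative filters and sub-pixel localization, which are not shown here):

```python
import numpy as np

def third_order_orientation(I):
    """Compare first-order (gradient) and third-order edge orientations.

    I is a 2-D grayscale image.  Derivatives are central differences via
    np.gradient, which returns (d/dy, d/dx) for a 2-D array.
    """
    Iy, Ix = np.gradient(I)            # first-order derivatives
    Ixy, Ixx = np.gradient(Ix)         # second-order derivatives of Ix
    Iyy, Iyx = np.gradient(Iy)         # second-order derivatives of Iy

    # F = Ix^2*Ixx + 2*Ix*Iy*Ixy + Iy^2*Iyy; edges lie on its zero level set
    F = Ix**2 * Ixx + 2 * Ix * Iy * Ixy + Iy**2 * Iyy

    # Traditional orientation: orthogonal to the image gradient (first order)
    theta_gradient = np.arctan2(Iy, Ix) + np.pi / 2

    # Third-order orientation: tangent to the zero level set of F, i.e.
    # orthogonal to grad F, which involves third derivatives of I
    Fy, Fx = np.gradient(F)
    theta_third_order = np.arctan2(Fy, Fx) + np.pi / 2

    return F, theta_gradient, theta_third_order
```

On the zero level set of F, `theta_third_order` gives the corrected tangent; away from edges both angle maps are meaningless and would be masked out by the localization step.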

Euler Spiral Construction

This software constructs an Euler spiral from a pair of points and the tangents at those points by reducing the problem to a nonlinear system of equations involving Fresnel integrals. The system is solved by optimization from a suitable initial condition, constrained to satisfy the given boundary conditions. Since the choice of an appropriate initial curve is critical in this optimization, an optimal solution is derived analytically in the class of biarc curves and used as the initial curve. The resulting interpolations are intuitive across gaps and occlusions, and are extensible, in contrast to the scale-invariant version of the elastica.
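For orientation, an Euler spiral is the curve whose curvature varies linearly with arclength, k(s) = k0 + gamma*s. The sketch below merely samples such a curve by numerically integrating the Fresnel-type integrals; the actual contribution of the software — solving for the parameters that meet given endpoint/tangent boundary conditions, seeded by the optimal biarc — is not reproduced here:

```python
import numpy as np

def euler_spiral_points(x0, y0, theta0, k0, gamma, length, n=200):
    """Sample an Euler spiral with curvature k(s) = k0 + gamma * s.

    (x0, y0, theta0) are the starting point and tangent angle.  The position
    integrals x(s) = integral cos(theta(s)), y(s) = integral sin(theta(s))
    are evaluated with a simple left Riemann sum.
    """
    s = np.linspace(0.0, length, n)
    theta = theta0 + k0 * s + 0.5 * gamma * s**2   # tangent angle along the curve
    ds = length / (n - 1)
    x = x0 + np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
    y = y0 + np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
    return np.stack([x, y], axis=1)
```

With gamma = 0 this degenerates to a circular arc (or a line when k0 = 0 as well), which is a useful sanity check before fitting boundary conditions.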


Edge-Based Figure-Ground Segregation of Moving Objects in Videos from Stationary Camera

This software implements an approach to "background modeling" of a time sequence of images acquired from a stationary camera. The approach is based on sub-pixel edge maps, in contrast to traditional intensity-based background modeling, and is motivated by the observation that intensity-based background models are sensitive to sudden changes in illumination and camera parameters, e.g., gain control. Sub-pixel edge maps are shown to be more robust to such changes and, furthermore, to require far fewer training images than comparable intensity-based models, even when sudden illumination changes are not an issue. In addition, intensity-based background models have a high false-positive rate of classifying foreground pixels as background, due to the frequent accidental alignment of figure intensities with the background model. In contrast, background models based on sub-pixel edge maps are highly localized and specific in orientation; they face this figure-ground ambiguity much less frequently because accidental alignment is far less likely. The software models the sub-pixel edge position and orientation using a Mixture of Gaussians model without requiring a higher-resolution discretization grid. It has been tested on a wide range of videos; the resulting background models yield a much more selective figure-ground segregation and are easier to train. When you download the package, please read 'installation_steps.txt' and then 'bg_modeling.txt'.
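To give a flavor of the modeling, here is a generic online Mixture-of-Gaussians update (Stauffer-Grimson style) for a single scalar observation stream, such as the edge orientation observed at one fixed location. This is a hedged sketch of the general technique, not the released code, which models sub-pixel position and orientation jointly:

```python
import numpy as np

class GaussianMixtureBackground:
    """Minimal online mixture-of-Gaussians model for one scalar stream."""

    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
        self.w = np.full(k, 1.0 / k)    # component weights
        self.mu = np.zeros(k)           # component means
        self.var = np.full(k, 1.0)      # component variances
        self.alpha = alpha              # learning rate
        self.match_sigmas = match_sigmas

    def update(self, x):
        """Update with observation x; return True if x matches the background."""
        d = np.abs(x - self.mu)
        matched = d < self.match_sigmas * np.sqrt(self.var)
        if matched.any():
            i = int(np.argmin(np.where(matched, d, np.inf)))
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
            hit = np.zeros_like(self.w)
            hit[i] = 1.0
            self.w = (1 - self.alpha) * self.w + self.alpha * hit
            is_background = True
        else:
            i = int(np.argmin(self.w))  # replace the weakest component
            self.mu[i], self.var[i], self.w[i] = x, 1.0, self.alpha
            is_background = False
        self.w /= self.w.sum()
        return is_background
```

The paper's point is that when the observations are edge orientations/positions rather than raw intensities, accidental matches between foreground and these mixtures become rare.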


Segregation of Moving Objects Using Elastic Matching

This is a method for figure-ground segregation of moving objects from monocular video sequences. The approach is based on tracking extracted contour fragments, in contrast to traditional approaches that rely on feature points, regions, or unorganized edge elements. Specifically, a notion of similarity between pairs of curve fragments appearing in two adjacent frames is developed and used to find the curve correspondence. This similarity metric is elastic in nature and additionally takes into account both a novel notion of transitions in curve fragments across video frames and an epipolar constraint. The color/intensity of the regions on either side of a curve is also used to reduce ambiguity and improve the efficiency of curve correspondence. The retrieved curve correspondence is then used to group the curves in each frame into clusters based on the pairwise similarity of how they transform from one frame to the next. Results on video sequences of moving vehicles show that tracking curve fragments produces a richer segregation of figure from ground than current region- or feature-based methods. When you download the package, please read 'installation_steps.txt' and then 'curve_tracking.txt'.
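The core primitive — an elastic similarity between two sampled curve fragments — can be illustrated with a plain dynamic-time-warping alignment over point distances. This sketch is only the skeleton of such a metric: the paper's version is an elastic energy that also uses tangent orientation, curve transitions across frames, the epipolar constraint, and flanking color, none of which are modeled here:

```python
import numpy as np

def curve_similarity(c1, c2):
    """DTW-style alignment cost between two curve fragments.

    c1, c2 are (N, 2) and (M, 2) arrays of ordered 2-D points.  Returns the
    average cost along the optimal monotone alignment (lower = more similar).
    """
    n, m = len(c1), len(c2)
    cost = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=2)
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = cost[i, j] + best
    return acc[-1, -1] / (n + m)
```

Pairwise similarities of this kind between fragments in adjacent frames are what drive both the correspondence and the subsequent motion-based clustering.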


A Multi-Stage Approach to Curve Extraction [paper] [code]

A multi-stage approach is developed that starts with local grouping of edges under geometric constraints, followed by a contour-level decision-making (merge/no-merge) procedure for local groups of edges. Learning frameworks are introduced both for the merge/no-merge decision and for the selection of veridical contour fragments, and a set of novel contour-level features is introduced for both problems.
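The first stage can be caricatured as greedy chaining of edgels under geometric constraints. The sketch below is only that caricature — the learned merge/no-merge classifier and the contour-level features that make the method work are not modeled:

```python
import numpy as np

def link_edges(edgels, dist_thresh=2.0, angle_thresh=np.pi / 6):
    """Greedily chain edgels (x, y, theta) into curve fragments.

    Two edgels are linked when they are within dist_thresh pixels and their
    tangent angles differ by at most angle_thresh.  Returns a list of
    fragments, each a list of edgel indices.
    """
    remaining = list(range(len(edgels)))
    fragments = []
    while remaining:
        chain = [remaining.pop(0)]
        grew = True
        while grew:
            grew = False
            x, y, th = edgels[chain[-1]]
            for k in list(remaining):
                xk, yk, thk = edgels[k]
                close = np.hypot(xk - x, yk - y) <= dist_thresh
                # wrap the tangent-angle difference into [-pi, pi)
                dth = (thk - th + np.pi) % (2 * np.pi) - np.pi
                if close and abs(dth) <= angle_thresh:
                    chain.append(k)
                    remaining.remove(k)
                    grew = True
                    break
        fragments.append(chain)
    return fragments
```

The later stages then decide, per pair of such local groups, whether to merge them into one contour, and select the veridical fragments.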

Datasets - Shape [back to menu]

Binary Shape Databases


The 99-Shape Database

The 216-Shape Database

The 1070-Shape Database


Brown Extended ETHZ Shape Dataset 


Please visit the following page: http://vision.lems.brown.edu/datasets/brown-ethz  

Curve Fragment Ground Truth Dataset (CFGD) [paper] [code] [datasets]


In contrast to the Berkeley Segmentation Dataset (BSDS), CFGD annotates each contour fragment (an ordered set of edges) individually. An evaluation framework is introduced for contour extraction and edge linking that solves the problem of multiple-to-multiple contour-fragment assignment.
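As a point of reference, conventional edge-level scoring matches detected edge points to ground truth within a pixel tolerance, as in the hedged sketch below. CFGD's framework instead scores whole contour fragments and solves a multiple-to-multiple fragment assignment, which this point-wise sketch does not attempt:

```python
import numpy as np

def edge_precision_recall(detected, ground_truth, tol=2.0):
    """Greedy one-to-one matching of edge points within tol pixels.

    detected and ground_truth are (N, 2) and (M, 2) point arrays.
    Returns (precision, recall).
    """
    if len(detected) == 0 or len(ground_truth) == 0:
        return 0.0, 0.0
    d = np.linalg.norm(detected[:, None, :] - ground_truth[None, :, :], axis=2)
    gt_used = np.zeros(len(ground_truth), dtype=bool)
    matches = 0
    for i in np.argsort(d.min(axis=1)):       # most confident detections first
        j = int(np.argmin(np.where(gt_used, np.inf, d[i])))
        if not gt_used[j] and d[i, j] <= tol:
            gt_used[j] = True
            matches += 1
    return matches / len(detected), matches / len(ground_truth)
```

Point-wise scores like these cannot penalize fragmented or wrongly linked contours, which is exactly the gap the CFGD evaluation framework addresses.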

Datasets - Multiview  [back to menu]


Providence Aerial Multiview - PAMView


Collected from a helicopter flying around Providence, RI. The helicopter follows a ring trajectory around multiple sites, providing a 360-degree view of each region of interest. Images and camera matrices are available.


More details and download: http://www.lems.brown.edu/~mir/helicopter_providence/sites.html

Multi-view Dense Point Correspondence Ground-Truth Dataset


A calibrated 13-view dataset with probabilistic dense point correspondence ground truth.


More details and downloads: http://vision.lems.brown.edu/datasets/dense-corr  

We gratefully acknowledge the support of NSF Grant 1116140   

Multiple View and Varying Illumination Dataset


Dataset to evaluate multiple view change detection algorithms under greatly varying illumination.

Thousands of images of a scene were taken over 6 months and are partially calibrated. Limited ground truth and measurements are also available.


More details and downloads: dataset page