VXL: The Vision-something Libraries
Many of the projects in our research lab use the VXL libraries. Here you can find useful links and instructions that will help you build these libraries.
LEMSVXL
Many of the projects in our research lab use our local LEMSVXL libraries. Here you can find useful links and instructions that will help you build these libraries.
Third-Order Edge Detector
The traditional approach to edge detection (e.g., Canny) using image derivatives localizes edges at the maxima of the gradient magnitude |grad I| in the direction of the gradient grad I/|grad I|, which gives grad(|grad I|) . grad I/|grad I| = 0. In Cartesian coordinates, this condition can be written as F(x,y) = Ix^2*Ixx + 2*Ix*Iy*Ixy + Iy^2*Iyy = 0, which involves up to second-order derivatives. However, the edge orientation is typically taken as the orthogonal to the image gradient, which involves only first-order derivatives. This is why the orientations of the edges as computed by the gradient operator are incorrect: the tangent computation needs to involve one higher order of derivatives than the computation that localizes the edges, and hence needs third-order derivatives. The tangent to the edge contour can be correctly computed from the gradient of F(x,y) on its zero level set, so the orientation of the edge involves up to third-order derivatives. For this reason we have chosen to call our edge detector the "third-order orientation detector". In the image above, edgels computed by the traditional method (in red) are compared to those computed by the third-order orientation operator (in green). Notice the consistency of the third-order edges with respect to the edge curves.
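To make the derivation concrete, here is a minimal numerical sketch of the idea (not the lab's implementation): it builds F from finite-difference derivatives via np.gradient and takes the edge tangent along the zero level set of F, i.e. perpendicular to grad F. A real detector would use Gaussian-smoothed, subpixel derivative operators; all names here are illustrative.

```python
import numpy as np

def third_order_edge_orientation(I):
    """Sketch of the third-order orientation idea: edges lie on the zero
    level set of F = Ix^2*Ixx + 2*Ix*Iy*Ixy + Iy^2*Iyy, and the edge
    tangent runs along that level set (perpendicular to grad F)."""
    # First-order derivatives (np.gradient returns d/d(row), d/d(col)).
    Iy, Ix = np.gradient(I.astype(float))
    # Second-order derivatives.
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    # Second-order localization condition: F = 0 marks edge locations.
    F = Ix**2 * Ixx + 2.0 * Ix * Iy * Ixy + Iy**2 * Iyy
    # Third-order orientation: grad F involves third derivatives of I;
    # the tangent direction (-Fy, Fx) is orthogonal to grad F.
    Fy, Fx = np.gradient(F)
    theta = np.arctan2(Fx, -Fy)
    return F, theta
```

On a vertical step edge, for example, this yields a tangent angle of ±pi/2 along the edge, whereas the raw gradient orientation degrades away from the exact edge location.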
Euler Spiral Construction
This software reduces the construction of an Euler spiral from a pair of points and the tangents at those points to solving a nonlinear system of equations involving Fresnel integrals, whose solution relies on optimization from a suitable initial condition constrained to satisfy the given boundary conditions. Since the choice of an appropriate initial curve is critical in this optimization, an optimal solution is analytically derived in the class of biarc curves, which is then used as the initial curve. The resulting interpolations are intuitive across gaps and occlusions, and are extensible, in contrast to the scale-invariant version of the elastica.
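For intuition, here is a minimal sketch of the forward model only: sampling an Euler spiral, whose curvature varies linearly with arc length, by numerically integrating the tangent angle (the Fresnel-type integrals). The software itself solves the inverse boundary-value problem by optimization, which is not shown; all names and parameters below are illustrative.

```python
import numpy as np

def euler_spiral(x0, y0, theta0, k0, gamma, length, n=200):
    """Sample an Euler spiral: curvature kappa(s) = k0 + gamma*s, so the
    tangent angle is theta(s) = theta0 + k0*s + gamma*s^2/2. Positions
    follow from integrating the unit tangent (Fresnel-type integrals,
    evaluated here by cumulative trapezoidal integration)."""
    s = np.linspace(0.0, length, n)
    theta = theta0 + k0 * s + 0.5 * gamma * s**2
    dx, dy = np.cos(theta), np.sin(theta)
    ds = s[1] - s[0]
    # Cumulative trapezoid rule, anchored at the start point.
    x = x0 + np.concatenate([[0.0], np.cumsum(0.5 * (dx[1:] + dx[:-1]) * ds)])
    y = y0 + np.concatenate([[0.0], np.cumsum(0.5 * (dy[1:] + dy[:-1]) * ds)])
    return x, y, theta
```

Setting gamma = 0 recovers a circular arc (or a straight line when k0 = 0 as well), which is why biarcs are a natural class in which to derive the initial curve for the optimization.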
Edge-Based Figure-Ground Segregation of Moving Objects in Videos from a Stationary Camera
This software is an approach to "background modeling" of a time sequence of images acquired from a stationary camera. The approach is based on subpixel edge maps, in contrast to traditional intensity-based background modeling, and is motivated by the observation that intensity-based background models are sensitive to sudden changes in illumination and camera parameters, e.g., gain control. It is shown that subpixel edge maps show greater robustness to such changes and, furthermore, require far fewer training images than comparable intensity-based models, even when sudden illumination changes are not an issue. In addition, in intensity-based background models the false positive rate of classifying a foreground pixel as background is high due to the frequent accidental alignment of figure intensities with the background model. In contrast, background models based on subpixel edge maps are highly localized and specific in orientation; they face a figure-ground ambiguity much less frequently due to the reduced likelihood of accidental alignment. The software models the subpixel edge position and orientation using a Mixture of Gaussians model without requiring a higher-resolution discretization grid. It has been tested on a wide range of videos; the resulting background models yield a much more selective figure-ground segregation and are easier to train. When you download the package, please read 'installation_steps.txt' and then 'bg_modeling.txt'.
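To illustrate the Mixture-of-Gaussians component, here is a hypothetical Stauffer-Grimson-style sketch of the per-feature update, with a single scalar feature standing in for edge position or orientation. This is not the code in the package; all class, parameter, and method names are illustrative.

```python
import numpy as np

class GaussianMixtureBackground:
    """Minimal per-feature Mixture-of-Gaussians background model sketch.
    Each mode tracks a weight, mean, and variance; an observation that
    matches a mode is classified as background and updates that mode."""

    def __init__(self, n_modes=3, alpha=0.05, match_sigmas=2.5):
        self.alpha = alpha                  # learning rate
        self.match_sigmas = match_sigmas    # match threshold in std devs
        self.w = np.zeros(n_modes)          # mode weights
        self.mu = np.zeros(n_modes)         # mode means
        self.var = np.ones(n_modes)         # mode variances

    def update(self, x):
        """Update the model with observation x; return True if x is
        classified as background (matches an existing mode)."""
        d2 = (x - self.mu) ** 2 / self.var
        matched = np.where((d2 < self.match_sigmas ** 2) & (self.w > 0))[0]
        if matched.size:
            # Update the strongest matching mode.
            k = matched[np.argmax(self.w[matched])]
            rho = self.alpha
            self.mu[k] += rho * (x - self.mu[k])
            self.var[k] += rho * ((x - self.mu[k]) ** 2 - self.var[k])
            self.w = (1.0 - self.alpha) * self.w
            self.w[k] += self.alpha
            is_bg = True
        else:
            # No match: replace the weakest mode with a new one; the
            # observation is (for now) foreground.
            k = np.argmin(self.w)
            self.mu[k], self.var[k] = x, 1.0
            self.w[k] = self.alpha
            is_bg = False
        self.w /= self.w.sum()
        return is_bg
```

A recurring feature value is quickly absorbed into a high-weight mode (background), while a value far from all modes, such as a passing object's edge, is flagged as foreground.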
Segregation of Moving Objects Using Elastic Matching
This is a method for figure-ground segregation of moving objects from monocular video sequences. The approach is based on tracking extracted contour fragments, in contrast to traditional approaches which rely on feature points, regions, and unorganized edge elements. Specifically, a notion of similarity between pairs of curve fragments appearing in two adjacent frames is developed and used to find the curve correspondence. This similarity metric is elastic in nature and, in addition, takes into account both a novel notion of transitions in curve fragments across video frames and an epipolar constraint. The color/intensity of the regions on either side of the curve is also used to reduce the ambiguity and improve the efficiency of curve correspondence. The retrieved curve correspondence is then used to group the curves in each frame into clusters based on the pairwise similarity of how they transform from one frame to the next. Results on video sequences of moving vehicles show that using curve fragments for tracking produces a richer segregation of figure from ground than current region- or feature-based methods. When you download the package, please read 'installation_steps.txt' and then 'curve_tracking.txt'.
A Multi-Stage Approach to Curve Extraction [paper] [code]
A multi-stage approach is developed which starts with local grouping of edges under geometric constraints and follows with a contour-level decision-making (merge / not merge) procedure for local groups of edges. Learning frameworks are introduced both for the merge/not-merge decision and for the selection of veridical contour fragments. A set of novel contour-level features is introduced for both problems.
Binary Shape Databases
The 99-Shape Database
The 216-Shape Database
The 1070-Shape Database

Brown Extended ETHZ Shape Dataset
Please visit the following page: http://vision.lems.brown.edu/datasets/brownethz 
Curve Fragment Ground Truth Dataset (CFGD) [paper] [code] [datasets]
Compared to the Berkeley Segmentation Dataset (BSDS), CFGD annotates each contour fragment (an ordered set of edges) individually. An evaluation framework is introduced for contour extraction and edge linking, which solves the problem of multiple-to-multiple assignment of contour fragments.
Providence Aerial Multiview (PAMView)
Collected from a helicopter flying around Providence, RI. The helicopter follows a ring trajectory around multiple sites, making available a 360° view of the region of interest. Images and camera matrices are available.
More details and download: http://www.lems.brown.edu/~mir/helicopter_providence/sites.html 
Multiview Dense Point Correspondence Ground-Truth Dataset
A calibrated 13-view dataset with probabilistic dense point correspondence ground truth. More details and downloads: http://vision.lems.brown.edu/datasets/densecorr

Multiple View and Varying Illumination Dataset
Dataset to evaluate multiple view change detection algorithms under greatly varying illumination. Thousands of images of a scene were taken over 6 months and are partially calibrated. Limited ground truth and measurements are also available.
More details and downloads: dataset page 