Digital Archaeology: Applications of Computer Vision to Archaeology


Fragment Assembly

 

We present a complete system for automatically assembling 3D pots from 3D measurements of their fragments, commonly called sherds. A Bayesian approach is formulated which, at present, models the measured data given a set of sherd geometric parameters. Dense sherd measurements are obtained by scanning the outside surface of each sherd with a laser scanner. Mathematical models, specified by a set of geometric parameters, represent the sherd surface and the break curves on the outer surface (where sherds have broken apart). Optimal alignment of assemblies of sherds, called configurations, is implemented as maximum likelihood estimation (MLE) of the surface and curve parameters given the measured data for the sherds in a configuration.
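The following sketch illustrates the MLE idea on a single sherd under simplifying assumptions: the pot is axially symmetric, its profile is a low-order polynomial in height, and measurement noise is Gaussian, so maximizing the likelihood reduces to nonlinear least squares on point-to-surface residuals. All function names, the pose parameterization, and the synthetic data are illustrative, not the project's actual models.

# Minimal sketch: MLE fit of laser-scanned sherd points to an axially symmetric pot surface.
# Assumes Gaussian noise, so MLE reduces to nonlinear least squares (illustrative only).
import numpy as np
from scipy.optimize import least_squares

def profile_radius(z, coeffs):
    # Illustrative pot profile: radius as a polynomial in height z.
    return np.polyval(coeffs, z)

def residuals(params, points):
    # params: 3 Euler angles + 3 translation + profile polynomial coefficients.
    angles, t, coeffs = params[:3], params[3:6], params[6:]
    ax, ay, az = angles
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    p = (points - t) @ (Rz @ Ry @ Rx).T          # move points into the pot's axial frame
    r = np.hypot(p[:, 0], p[:, 1])               # measured radius of each point about the axis
    return r - profile_radius(p[:, 2], coeffs)   # signed distance to the surface of revolution

def fit_sherd(points, n_profile_coeffs=4):
    x0 = np.zeros(6 + n_profile_coeffs)
    x0[-1] = np.hypot(points[:, 0], points[:, 1]).mean()   # crude init: constant radius
    return least_squares(residuals, x0, args=(points,))

# Usage with synthetic data standing in for laser-scanned sherd points.
if __name__ == "__main__":
    z = np.linspace(0.0, 1.0, 200)
    theta = np.random.uniform(0, np.pi / 3, 200)            # a partial band, like a sherd
    r = 1.0 + 0.2 * z
    pts = np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
    fit = fit_sherd(pts)
    print("converged:", fit.success, "RMS residual:", np.sqrt(np.mean(fit.fun ** 2)))

In the full system the likelihood also covers the break curves and joint configurations of multiple sherds; the single-sherd fit above only conveys the estimation structure.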

 



REVEAL

REVEAL (Reconstruction and Exploratory Visualization: Engineering meets ArchaeoLogy)

An NSF Project consisting of:

  • REVEAL: A System for Streamlined Powerful Sensing, Archiving, Extracting Information from, Visualizing and Communicating Archaeological Site-excavation Data. REVEAL is available to the archaeology community.
  • Core Computer-Vision/Pattern-Recognition/Machine-Learning Research with Applications to Archaeology and the Humanities.


Object Recognition and Detection


Object Recognition and Segmentation Using a Shock Graph Based Shape Model

In this project, we are developing an object recognition and segmentation framework that uses a shock-graph-based shape model. Our fragment-based generative model is capable of generating a wide variety of shapes as instances of a given object category. To recognize and segment objects, we use a progressive selection mechanism to search among the generated shapes for the category instances present in the image. The search begins with a large pool of candidates identified by a dynamic programming (DP) algorithm and progressively reduces it in size by applying a series of criteria, as sketched below.
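A minimal sketch of the progressive selection step follows: a large candidate pool is scored and pruned by a sequence of criteria, each keeping only a fraction of the survivors. The scoring functions and keep fractions are illustrative placeholders standing in for the shock-graph and appearance criteria used in the project.

# Illustrative progressive selection: shrink a candidate pool with increasingly strict criteria.
from typing import Callable, List, Tuple
import random

Candidate = dict   # e.g. {"shape": ..., "location": ...}; placeholder structure

def progressive_selection(candidates: List[Candidate],
                          criteria: List[Tuple[Callable[[Candidate], float], float]]) -> List[Candidate]:
    """Each criterion is (scoring function, keep_fraction). After scoring,
    only the top keep_fraction of the surviving pool moves on."""
    pool = list(candidates)
    for score_fn, keep_fraction in criteria:
        scored = sorted(pool, key=score_fn, reverse=True)
        keep = max(1, int(len(scored) * keep_fraction))
        pool = scored[:keep]
    return pool

# Usage with toy scoring functions.
if __name__ == "__main__":
    pool = [{"id": i, "coarse": random.random(), "fine": random.random()} for i in range(1000)]
    survivors = progressive_selection(
        pool,
        criteria=[(lambda c: c["coarse"], 0.10),   # cheap filter keeps 10% of the pool
                  (lambda c: c["fine"], 0.05)])    # expensive filter keeps 5% of those
    print(len(survivors), "candidates remain")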



Fragment Based Object Recognition




Object Recognition in Probabilistic 3D Scenes

A semantic description of 3-d scenes is essential to many urban and surveillance applications. The general problems of object localization and class recognition in computer vision are traditionally addressed in 2-d images. In contrast, this project aims to reason about the state of the 3-d world. More specifically, it uses probabilistic volumetric models of scene geometry and appearance to perform object categorization directly in 3-d. The methods and results presented here were first accepted as a full paper (30 min. oral presentation) at the International Conference on Pattern Recognition Applications and Methods (ICPRAM 2012). A more recent and comprehensive evaluation has been accepted for publication in the IEEE Journal of Selected Topics in Signal Processing.



Object Part Hypotheses




Visualization and Human Interfaces


Advancing Digital Scholarship with Touch-Surfaces and Large-Format Interactive Display Walls

This project explores a multi-stage program of research, implementation, and evaluation of collaborative, interactive, large-screen, gesture-driven displays used to enhance a wide range of scholarly activities and creative expressions. Although the project includes research topics such as seamless imaging, touch-enabled computing, parallel rendering, design methodologies, and intelligent networking, our main focus is camera-based interaction: studying how to track people's locations, their features, hand-held objects, and hand gestures, and using this information to trigger actions and to render imagery and sound appropriately, making possible an exciting multi-user experience with the computer system. A minimal camera-based trigger loop is sketched after this description.
As an initial accomplishment, we have constructed the first version of our scalable, high-resolution display wall...
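As an illustration of the camera-based interaction idea, the sketch below detects moving foreground blobs with OpenCV background subtraction and fires a placeholder action when a blob enters a hot-spot region. It is a generic stand-in for the project's multi-camera tracking, not its actual pipeline; the region coordinates, area threshold, and camera index are illustrative assumptions, and the two-value cv2.findContours return assumes OpenCV 4.x.

# Illustrative camera-based interaction loop: foreground blobs trigger a region-based action.
import cv2

TRIGGER_REGION = (100, 100, 300, 300)    # x, y, w, h of an interactive hot spot (placeholder)

def blob_in_region(x, y, w, h, region):
    rx, ry, rw, rh = region
    cx, cy = x + w / 2, y + h / 2        # blob centre
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh

def run(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                     # foreground/motion mask
        mask = cv2.medianBlur(mask, 5)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 500:                   # ignore small noise blobs
                continue
            x, y, w, h = cv2.boundingRect(c)
            if blob_in_region(x, y, w, h, TRIGGER_REGION):
                print("trigger: render response for blob at", (x, y))   # placeholder action
        if cv2.waitKey(1) == 27:                           # Esc to quit
            break
    cap.release()

if __name__ == "__main__":
    run()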



Multiview Geometry Reconstruction and Calibration


Probabilistic Volumetric Modeling

Pollard and Mundy (2007) proposed a probabilistic volume model that can represent the ambiguity and uncertainty in 3-d models derived from multiple image views. In Pollard's model, a region of three-dimensional space is decomposed into a regular 3-d grid of cells, called voxels. A voxel stores two kinds of state information: (i) the probability that the voxel contains a surface element and (ii) a mixture of Gaussians that models the surface appearance of the voxel as learned from a sequence of images. The surface probability is updated by incremental Bayesian learning, where the probability of a voxel containing a surface element after N+1 images increases if the Gaussian mixture at that voxel explains the intensity observed in the (N+1)-th image better than any other voxel along the projection ray. In a fixed-grid voxel representation, most of the voxels may correspond to empty areas of a scene, making storage of...
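The sketch below gives a simplified, single-ray version of this update: each voxel's surface probability is multiplied by how well its Gaussian-mixture appearance model explains the newly observed intensity, then renormalized against the other voxels on the ray. It omits the visibility/occlusion weighting of the full Pollard-Mundy formulation, and the class and field names are illustrative.

# Simplified incremental Bayesian update of surface probabilities along one projection ray.
import numpy as np

class Voxel:
    def __init__(self, p_surface, means, sigmas, weights):
        self.p_surface = p_surface                    # P(voxel contains a surface element)
        self.means = np.asarray(means, float)         # Gaussian-mixture appearance model
        self.sigmas = np.asarray(sigmas, float)
        self.weights = np.asarray(weights, float)

    def appearance_likelihood(self, intensity):
        g = np.exp(-0.5 * ((intensity - self.means) / self.sigmas) ** 2) / (self.sigmas * np.sqrt(2 * np.pi))
        return float(np.dot(self.weights, g))

def update_ray(voxels, observed_intensity):
    """Bayes update of the surface probabilities of all voxels along one ray
    (simplified: no visibility term, posterior normalized over the ray)."""
    likelihoods = np.array([v.appearance_likelihood(observed_intensity) for v in voxels])
    priors = np.array([v.p_surface for v in voxels])
    evidence = float(np.dot(priors, likelihoods)) + 1e-12
    for v, prior, like in zip(voxels, priors, likelihoods):
        v.p_surface = prior * like / evidence         # increases if this voxel explains the pixel best

# Usage: a toy ray of three voxels observing intensity 0.8.
if __name__ == "__main__":
    ray = [Voxel(0.3, [0.2], [0.1], [1.0]),
           Voxel(0.3, [0.8], [0.1], [1.0]),
           Voxel(0.3, [0.5], [0.2], [1.0])]
    update_ray(ray, observed_intensity=0.8)
    print([round(v.p_surface, 3) for v in ray])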



High Resolution Surface Reconstruction from Aerial Images

This project presents a novel framework for surface reconstruction from multi-view aerial imagery of large-scale urban scenes, which combines probabilistic volumetric modeling with smooth signed-distance surface estimation to produce very detailed and accurate surfaces. Using a continuous probabilistic volumetric model that allows for explicit representation of ambiguities caused by moving objects, reflective surfaces, areas of constant appearance, and self-occlusions, the algorithm learns the geometry and appearance of a scene from a calibrated image sequence. An online implementation of the Bayesian learning process on GPUs significantly reduces the time required to process a large number of images. The probabilistic volumetric model of occupancy is subsequently used to estimate a smooth approximation of the signed distance function to the surface. This step, which reduces to the solution of a sparse linear system (see the sketch below), is very efficient and scalable to large data sets. The proposed...
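To make the sparse-linear-system step concrete, the sketch below solves the analogous 1-d problem: fit a smooth signed-distance-like function on a grid to a few noisy samples by minimizing a data term plus a Laplacian smoothness term, which yields a sparse symmetric positive definite system solved with SciPy. The grid size, sample values, and smoothness weight are illustrative assumptions; the project solves the corresponding 3-d problem over the probabilistic volume.

# Illustrative 1-d version of smooth signed-distance estimation as a sparse linear system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                                                  # grid size (illustrative)
obs_idx = np.array([20, 60, 100, 140, 180])              # grid nodes with observations
obs_val = np.array([-1.0, -0.3, 0.0, 0.4, 1.0])          # noisy signed-distance samples

# Data term: selection matrix picking the observed grid nodes.
A = sp.csr_matrix((np.ones(len(obs_idx)), (np.arange(len(obs_idx)), obs_idx)), shape=(len(obs_idx), n))

# Smoothness term: second-difference (discrete Laplacian) operator.
L = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))

lam = 10.0                                               # smoothness weight (illustrative)
# Normal equations of  min ||A f - obs||^2 + lam ||L f||^2  -> sparse SPD system.
M = (A.T @ A + lam * (L.T @ L)).tocsc()
rhs = A.T @ obs_val
f = spla.spsolve(M, rhs)
print("fitted values at observations:", np.round(f[obs_idx], 3))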



From Multi-view Image Curves to 3D Drawings

The goal of this project is to generate clean, accurate, and reliable 3D drawings from multi-view image data. Our output representation is a 3D graph describing the geometry and organization of the scene, which can be regarded as a 3D version of architectural drawings and blueprints.
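A minimal form of such a representation is sketched below: nodes are 3-d junction or endpoint locations, and each edge carries an ordered polyline of 3-d samples along the reconstructed curve. The field names and helper method are illustrative assumptions, not the project's actual format.

# Illustrative 3-d drawing as a graph: junction nodes connected by sampled 3-d curves.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class CurveEdge:
    start: int                       # node id of one endpoint
    end: int                         # node id of the other endpoint
    samples: List[Point3D]           # ordered 3-d samples along the reconstructed curve

@dataclass
class Drawing3D:
    nodes: Dict[int, Point3D] = field(default_factory=dict)   # junctions / curve endpoints
    edges: List[CurveEdge] = field(default_factory=list)

    def add_curve(self, start_pt: Point3D, end_pt: Point3D, samples: List[Point3D]) -> None:
        a, b = len(self.nodes), len(self.nodes) + 1
        self.nodes[a], self.nodes[b] = start_pt, end_pt
        self.edges.append(CurveEdge(a, b, samples))

# Usage: one vertical edge of a building outline.
if __name__ == "__main__":
    d = Drawing3D()
    d.add_curve((0, 0, 0), (0, 0, 3), [(0, 0, z * 0.5) for z in range(7)])
    print(len(d.nodes), "nodes,", len(d.edges), "curve edges")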


The input can be a sequence of either video frames or discrete images. If the calibration is not already available, we start by calibrating the...



3D Surface Representation, Design, and Scanning


Registration of PVM

This work studies the quality of probabilistic volumetric model (PVM) registration using feature-matching techniques based on the FPFH and SHOT descriptors. The quality of the underlying geometry, and therefore the effectiveness of the descriptors for matching, is affected by variations in the conditions of the data collection. A major contribution of this work is to evaluate the quality of feature-based registration of PVM models under different scenarios that reflect the kind of variability observed across collections from different time instances. More precisely, this work investigates variability in terms of model discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. A corresponding manuscript is under preparation.
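For concreteness, the sketch below shows a typical FPFH-based, RANSAC feature-matching registration using Open3D's registration pipeline; it is a representative stand-in for the kind of matching evaluated in this work, not the exact experimental setup. The file names, voxel size, and thresholds are placeholders, and the RANSAC call assumes the Open3D >= 0.12 API (its signature varies slightly across versions).

# Illustrative FPFH feature-based registration of two point clouds with Open3D.
import open3d as o3d

def preprocess(pcd, voxel):
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

def register(source_path, target_path, voxel=0.05):
    source = o3d.io.read_point_cloud(source_path)     # e.g. points sampled from one PVM
    target = o3d.io.read_point_cloud(target_path)     # e.g. points from another collection time
    src_down, src_fpfh = preprocess(source, voxel)
    tgt_down, tgt_fpfh = preprocess(target, voxel)
    dist = 1.5 * voxel                                 # max correspondence distance (placeholder)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation, result.fitness       # alignment and an inlier-based quality score

if __name__ == "__main__":
    T, fitness = register("model_time1.ply", "model_time2.ply")   # placeholder file names
    print("estimated transform:\n", T, "\nfitness:", fitness)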