Salience Distance Transform

The distance transform efficiently computes, at each pixel, the distance to the nearest feature pixel. It is a useful tool in computer vision, with applications in model matching and in extracting features such as medial axes.
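For illustration, here is a minimal sketch of a standard Euclidean distance transform computed with SciPy; the toy feature map and the use of scipy.ndimage.distance_transform_edt are illustrative assumptions, not the downloadable code mentioned below.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary feature map: True marks feature (e.g. edge) pixels.
features = np.zeros((7, 7), dtype=bool)
features[3, 1] = True
features[1, 5] = True

# distance_transform_edt measures the distance to the nearest zero pixel,
# so invert the map to obtain the distance to the nearest feature pixel.
dist = distance_transform_edt(~features)
print(np.round(dist, 2))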

A weakness of the approach is that it requires a binary feature map. When it is applied to edge maps, for instance, the edges must first be thresholded, and over- or under-thresholding then degrades the effectiveness of the distance transform in the subsequent application (e.g. model matching).

We have developed an improvement called the salience distance transform, which avoids thresholding and instead incorporates other properties of the features into the transform.
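As a rough illustration of the idea, the sketch below seeds each edge pixel with a value inversely proportional to its edge magnitude and then propagates distances with a two-pass chamfer scan. The seeding rule, the weight parameter, and the chamfer step costs are assumptions chosen for illustration; they do not necessarily match the published algorithm or the downloadable code.

# Sketch of a salience-weighted distance transform: feature pixels are
# seeded with a value that decreases as edge magnitude (salience) grows,
# and a two-pass chamfer propagation spreads these values outwards.
import numpy as np

def salience_distance_transform(magnitude, weight=10.0, a=1.0, b=1.4):
    """magnitude: 2-D array of edge magnitudes (0 where there is no edge).
    weight scales the influence of salience; a, b are chamfer step costs."""
    h, w = magnitude.shape
    d = np.full((h, w), 1e9)
    # Seed: strong edges start low, weak edges higher, non-edges at "infinity".
    edges = magnitude > 0
    d[edges] = weight / magnitude[edges]

    # Forward pass (top-left to bottom-right).
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + b)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + a)

    # Backward pass (bottom-right to top-left).
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + a)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + b)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + b)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + a)
    return d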

The following examples show how different thresholds produce very different results from the standard distance transform. Even without thresholding, the salience distance transform reduces the effect of low-level edge clutter by incorporating edge magnitude. Introducing edge length as an additional salience measure further accentuates the dominant features in the scene (a sketch of one way to combine the two follows the list of examples).

- the original image
- edge map
- log-mapped (standard) distance transform of the thresholded edges
- log-mapped (standard) distance transform of the edges thresholded at a different level
- salience distance transform using edge magnitude (no thresholding)
- salience distance transform using edge magnitude and edge list length
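To suggest how edge list length might also contribute, connected edge chains can be labelled and their pixel counts folded into the per-pixel salience before running the transform sketched above. This is again only an illustration; the product used to combine magnitude and length is an assumed choice, not taken from the published method.

# Hypothetical combination of edge magnitude and edge list length into a
# single salience map; the product rule here is an illustrative choice only.
import numpy as np
from scipy.ndimage import label

def magnitude_and_length_salience(magnitude):
    edges = magnitude > 0
    # Label 8-connected edge chains and count the pixels in each chain.
    labels, n = label(edges, structure=np.ones((3, 3)))
    lengths = np.bincount(labels.ravel())
    salience = np.zeros_like(magnitude, dtype=float)
    salience[edges] = magnitude[edges] * lengths[labels[edges]]
    return salience

# The resulting salience map can be passed to salience_distance_transform
# in place of the raw edge magnitudes.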

More details are given in:

You can download code to implement the salience distance transform.
