In this thesis, we present several new image segmentation methods based on a weighted medial axis representation. This representation captures local contrast, region shape, and global image structure. Starting from any image gradient or grayscale contour map, we first compute a weighted distance map and its weighted medial axis with a linear-time process. Applying the same distance propagation from the medial axis back towards the region boundaries, we dually obtain an initial image partition and a graph representing image structure. This procedure is equivalent to the watershed transform of the weighted distance map and is therefore both topological and contrast-weighted. It is, however, computationally more efficient, because we first decompose the medial axis and then use our linear-time process to propagate over the remaining image surface. Using a disjoint-set data structure, we then merge adjacent regions according to a number of criteria. This framework has previously been applied successfully to local feature detection; in this thesis, we investigate merging criteria for adjacent regions that are appropriate for the problem of image segmentation.
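The region-merging step described above can be sketched with a standard disjoint-set (union-find) structure. This is a minimal illustration, not the thesis implementation: the function names, the dictionary-based dissimilarity, and the single-threshold stopping rule are assumptions; the thesis merges according to criteria such as saddle-point height on the medial axis.

```python
# Hypothetical sketch: merging adjacent regions of an initial partition
# with a disjoint-set (union-find) structure. The merge criterion is
# passed in as a callable; the thesis uses measures such as the
# saddle-point height between neighbouring regions.

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path halving keeps find near-constant amortized time.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def merge_regions(n_regions, adjacency, criterion, threshold):
    """Merge adjacent region pairs whose dissimilarity is below threshold.

    adjacency: iterable of (region_a, region_b) pairs from the region graph.
    criterion: callable (a, b) -> dissimilarity, e.g. saddle-point height.
    Returns a representative label for each of the n_regions regions.
    """
    ds = DisjointSet(n_regions)
    # Process pairs in order of increasing dissimilarity, as a
    # hierarchical merging process would.
    for a, b in sorted(adjacency, key=lambda p: criterion(*p)):
        if criterion(a, b) < threshold:
            ds.union(a, b)
    return [ds.find(i) for i in range(n_regions)]
```

For example, with four regions chained as `[(0, 1), (1, 2), (2, 3)]` and a strong boundary only between regions 1 and 2, merging at a suitable threshold leaves two groups: `{0, 1}` and `{2, 3}`.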
The first proposed technique merges regions according to the saddle-point height of the medial axis. A second approach focuses on boundary gaps and merges regions according to how fragmented they are. A further direction we experiment with is the use of an ultrametric contour map representation to implement hierarchical segmentation. As inter-region ultrametric dissimilarities, we use the mean boundary strength on the common boundary between adjacent regions, as well as a new inter-region fragmentation dissimilarity. All alternatives are evaluated with different input grayscale contour maps on the Berkeley segmentation dataset and compared to a number of state-of-the-art algorithms. We achieve performance near the state of the art with very practical running times.
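The mean-boundary-strength dissimilarity mentioned above can be illustrated as follows. This is a minimal sketch under stated assumptions, not the thesis implementation: it assumes a 2-D integer label image and a grayscale contour map of the same shape, represented here as nested lists, and the function name is illustrative.

```python
# Hypothetical sketch: for every pair of adjacent regions, average the
# contour-map values over the pixels of their common boundary. Lower
# values indicate weaker evidence for a boundary, so such pairs would be
# merged earlier in a hierarchical (ultrametric) segmentation.

def boundary_dissimilarities(labels, contour):
    """Return {(a, b): mean contour strength on the common boundary}.

    For each horizontally or vertically adjacent pixel pair carrying
    different labels, both pixels' contour values are accumulated for
    that region pair; the dissimilarity is their mean.
    """
    sums, counts = {}, {}
    h, w = len(labels), len(labels[0])
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    pair = tuple(sorted((labels[y][x], labels[ny][nx])))
                    sums[pair] = (sums.get(pair, 0.0)
                                  + contour[y][x] + contour[ny][nx])
                    counts[pair] = counts.get(pair, 0) + 2
    return {p: sums[p] / counts[p] for p in sums}
```

Thresholding such dissimilarities at increasing levels then yields a hierarchy of coarser and coarser partitions, which is what the ultrametric contour map representation encodes.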