Painters of Byzantine and post-Byzantine artworks followed specific rules and iconographic patterns when creating sacred figures. Because of these rules, the sacred figure depicted in an artwork is recognizable. In this work, we propose an automatic knowledge-based image analysis system for classifying Byzantine icons on the basis of sacred-figure recognition.
First, the system detects and analyzes the most important facial characteristics, providing rich but imprecise information about the Byzantine icon. The extracted information is then expressed in an expressive terminology formalized using Description Logics (DLs), which form the basis of Semantic Web ontology languages.
To handle the imprecision involved effectively, fuzzy extensions of DLs are used for the assertional part of the ontology. In this way, the information extracted by image analysis forms the assertional component, while the expressive terminology, which formalizes the rules and iconographic patterns, permits categorization of the Byzantine artworks.
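To illustrate this division (the concept and individual names below are hypothetical and serve only as an example, not as excerpts from the actual ontology), the terminology might encode an iconographic rule such as

StNikolaos ⊑ Saint ⊓ ∃hasPart.ShortWhiteBeard ⊓ ∃hasPart.GreyHair,

while image analysis would contribute fuzzy assertions such as ⟨ShortWhiteBeard(segment1) ≥ 0.8⟩ and ⟨hasPart(icon1, segment1) ≥ 1⟩, stating, for instance, that a detected segment is a short white beard with degree at least 0.8. Fuzzy DL reasoning can then derive the degree to which icon1 is an instance of StNikolaos.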
We evaluated our system on a database, provided by the Mount Sinai Foundation in Greece, containing 2,000 digitized Byzantine icons dating back to the 13th century. The icons depict 50 different characters; according to Dionysios's painter's manual, each character has specific facial features that make him or her distinguishable. Evaluation of the Byzantine-icon-analysis subsystem produced promising results. The subsystem's mean response time was approximately 15 seconds on a typical PC. In the semantic-segmentation module, the face-detection submodule reached 80 percent accuracy.
In most cases, failures occurred in icons with a destroyed face area. When the submodule detected the face, it almost always detected the eyes and nose as well. The base-color-analysis submodule determined the color model the icon's artist used, which the feature extraction module relied on later. Using the nose's height, which can be determined easily, and taking into account the manual's rules, the face-component-detection submodule estimated the positions of background and foreground pixels for every part of the face. These positions constituted the input to the graph-cut algorithm. This submodule achieved 96 percent accuracy.
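The graph-cut step is not detailed further here; the sketch below only illustrates how seeded graph-cut segmentation of one face part could look, assuming OpenCV's GrabCut as the graph-cut implementation and hypothetical seed masks derived from the rule-based position estimates. It is not the authors' actual code.

```python
import numpy as np
import cv2

def segment_face_part(icon_bgr, fg_seed_mask, bg_seed_mask, iterations=5):
    """Seeded graph-cut segmentation of one face part (sketch using GrabCut).

    fg_seed_mask, bg_seed_mask: boolean arrays marking pixels that the
    rule-based face-component-detection step labeled as surely inside
    or surely outside the face part.
    """
    # Start with every pixel marked as "probably background".
    mask = np.full(icon_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    mask[bg_seed_mask] = cv2.GC_BGD   # certain background seeds
    mask[fg_seed_mask] = cv2.GC_FGD   # certain foreground seeds

    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(icon_bgr, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)

    # Pixels labeled (probably) foreground form the segment of the face part.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```

In this sketch, the seed masks would come from the positions estimated via the nose's height and the manual's facial proportions, as described above.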
The feature extraction module extracted features for every icon segment. Then, using the fuzzy partitions, the semantic-interpretation module interpreted each segment's specific features and properties and the relationships among the face parts. That module thus determined each feature's degree of membership in a specific class, thereby creating an image description. To evaluate overall system performance, we used precision (the number of correctly classified icons divided by the number of classified icons) and recall (the number of correctly classified icons divided by the total number of icons in the class).
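The exact form of these fuzzy partitions is not given here; as a minimal sketch, assuming trapezoidal membership functions over a normalized, hypothetical beard-length feature (the breakpoints below are illustrative only), the degrees of membership could be computed as follows.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy partition of a normalized beard-length feature into
# linguistic classes; the breakpoints are illustrative, not the paper's values.
beard_partition = {
    "short":  lambda x: trapezoid(x, -0.1, 0.0, 0.2, 0.4),
    "medium": lambda x: trapezoid(x, 0.2, 0.4, 0.6, 0.8),
    "long":   lambda x: trapezoid(x, 0.6, 0.8, 1.0, 1.1),
}

feature = 0.7  # e.g., beard length relative to face height
degrees = {label: mu(feature) for label, mu in beard_partition.items()}
# degrees ≈ {'short': 0.0, 'medium': 0.5, 'long': 0.5}; the fuzzy
# assertional statements are built from such membership degrees.
```

The table below lists the per-class classification results obtained with the complete system.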
Class | No. of icons in class | No. of images classified | No. of images correctly classified | Precision | Recall
Jesus | 70 | 67 | 58 | 0.87 | 0.83 |
Virgin Mary | 60 | 54 | 43 | 0.80 | 0.72 |
Peter | 50 | 44 | 38 | 0.86 | 0.76 |
Paul | 50 | 44 | 39 | 0.89 | 0.78 |
Katerina | 40 | 35 | 30 | 0.86 | 0.75 |
Ioannis | 40 | 37 | 29 | 0.78 | 0.73 |
Luke | 40 | 33 | 28 | 0.85 | 0.70 |
Andrew | 40 | 33 | 29 | 0.88 | 0.73 |
Stefanos | 30 | 27 | 24 | 0.89 | 0.80 |
Konstantinos | 40 | 36 | 30 | 0.83 | 0.75 |
Dimitrios | 40 | 36 | 31 | 0.86 | 0.78 |
Georgios | 40 | 37 | 32 | 0.86 | 0.80 |
Eleni | 40 | 37 | 31 | 0.84 | 0.78 |
Pelagia | 20 | 17 | 15 | 0.88 | 0.75 |
Nikolaos | 40 | 38 | 33 | 0.87 | 0.83 |
Basileios | 40 | 37 | 31 | 0.84 | 0.78 |
Antonios | 30 | 27 | 20 | 0.74 | 0.67 |
Eythimios | 25 | 23 | 18 | 0.78 | 0.72 |
Thomas | 35 | 32 | 26 | 0.81 | 0.74 |
Minas | 20 | 17 | 14 | 0.82 | 0.70 |
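For example, for the Jesus class the counts in the table give precision = 58/67 ≈ 0.87 and recall = 58/70 ≈ 0.83, matching the reported values.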