This thesis presents a prototype optical virtual studio system. In particular, a methodology is presented for the construction of a two-tone blue screen that incorporates the features needed to estimate a camera's 3D motion parameters without the use of any electromechanical equipment. In parallel, an efficient solution for computing the camera's motion is introduced, based on line correspondences. Simulations, real experiments, and error analysis demonstrate the robustness of the system. In this way, mixed-reality sequences are generated without imposing constraints on the movement of objects, actors, or the camera. The next step addresses the localization and tracking, over time, of moving or static objects on the set. A simple method is presented that exploits the properties of the prototype virtual studio to ensure real-time tracking and allows virtual objects and actors to be inserted into the scene automatically. Finally, in an attempt to further enhance the generated sequence, solutions concerning the communication and interaction between virtual and human actors are examined. A methodology is presented for extracting human facial features that indicate the actor's behavior, which in turn provides the basis for the system's feedback. The dissertation concludes by presenting and discussing simulations performed under a general scenario of an augmented virtual studio.