In the technical study of easel paintings, there is a long tradition of using a variety of imaging techniques to reveal information that furthers understanding of the artworks.
The imaging techniques used range from visible-light images taken under different lighting conditions or magnifications, through images taken using other forms of radiation (e.g. infrared reflectograms and X-radiographs), to images derived from datacubes generated by newer spectroscopic imaging techniques such as macro X-ray fluorescence scanning (MA-XRF) and hyperspectral imaging (HSI).
However, in order to extract as much information as possible from these often very large images (or datasets), whether by visual inspection or by advanced signal processing approaches, an essential first step is to align the images accurately. In image processing, the task of matching images taken, for example, at different times, from different sensors, or from different viewpoints is known as registration. Image registration is not a problem unique to the cultural heritage sector; it also arises in medical imaging, remote sensing, computer vision, and related fields. It becomes particularly relevant, and considerably more difficult, when registering multimodal images, that is, images collected with different imaging techniques.
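To make the task concrete, the sketch below recovers the simplest registration transform, a pure translation between two images of the same painting, using phase correlation. It is illustrative only: it assumes scikit-image and SciPy are available, that the two images have already been resampled to the same size, and that the file names (which are hypothetical) point to co-captured images of the same painting.

```python
# Minimal sketch: recover a pure translation between two images of the same
# painting via phase correlation. File names are hypothetical; both images
# are assumed to have already been resampled to the same size.
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

reference = io.imread("visible.png", as_gray=True)
moving = io.imread("infrared.png", as_gray=True)

# Estimate the (row, col) shift that best aligns `moving` to `reference`;
# `upsample_factor` refines the estimate to sub-pixel precision.
offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)

# Apply the recovered shift to bring the moving image into alignment.
aligned = nd_shift(moving, shift=offset)
print(f"Estimated shift (rows, cols): {offset}, normalised error: {error:.4f}")
```

Real multimodal pairs rarely differ by translation alone, which is why rotation, scale, and local deformation must also be handled, as discussed next.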
This project aims to facilitate the processing and interpretation of multimodal datasets from paintings by developing new registration methods that can automatically extract features common to different modalities, that are resilient to variations in spatial resolution and other forms of inconsistency, and that can be used for rigid or semi-rigid registration.
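For reference, the conventional feature-based pipeline that such methods would generalise is sketched below: detect keypoints, match descriptors, and robustly fit a rigid (similarity) transform. This is only an illustration using OpenCV's ORB detector, which is designed for same-modality images and often fails across modalities; that gap is precisely what modality-resilient features are meant to close. The file names are hypothetical.

```python
# Sketch of a conventional feature-based rigid registration pipeline with
# OpenCV. ORB descriptors are not modality-invariant, so matches between,
# say, a visible image and an X-radiograph may be sparse or unreliable.
import cv2
import numpy as np

reference = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("xradiograph.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect keypoints and compute binary descriptors in both images.
orb = cv2.ORB_create(nfeatures=5000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_mov, des_mov = orb.detectAndCompute(moving, None)

# 2. Match descriptors (Hamming distance for ORB), keeping mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_mov, des_ref), key=lambda m: m.distance)

src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# 3. Robustly fit a partial affine (rotation + translation + uniform scale)
#    transform; RANSAC discards the mismatches that are common when the
#    two modalities depict different physical properties of the painting.
transform, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# 4. Warp the moving image into the reference frame.
h, w = reference.shape
aligned = cv2.warpAffine(moving, transform, (w, h))
```

Semi-rigid registration would replace the single global transform in step 3 with a spatially varying one, for example a transform estimated per region or a smooth deformation field.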