Assessing the quality of images is a challenging task. To achieve this goal, images must either be evaluated by a pool of subjects following a well-defined protocol, or an objective quality metric must be defined. In this work, an objective quality metric based on a deep neural network is proposed. The metric accounts for the human visual system by computing the saliency map and natural scene statistics features of the image under test. The neural network is composed of two modules: the convolutional layers and the regression units. The former is trained using preprocessed distorted images, and its feature weights are smoothed by exploiting the estimated saliency map. The latter is fitted with the ground-truth quality scores of the input image and the feature weights from the first module, scaled by a visual sensitivity factor derived from natural scene statistics features. The performance of the proposed metric has been evaluated on four datasets: LIVEIQA, TID2013, CSIQ, and KADID10K. The results show the effectiveness of the proposed system in closely matching the predicted quality scores to the ground-truth ones.
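The abstract describes a two-stage pipeline: convolutional feature maps are smoothed by a saliency map, then scaled by a sensitivity factor derived from natural scene statistics (NSS), and finally passed to a regression head. The following is a minimal NumPy sketch of that weighting scheme only; all function names, the MSCN-based sensitivity stand-in, and the linear regression head are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mscn_coefficients(img, eps=1e-6):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients,
    a common NSS representation of a grayscale image."""
    return (img - img.mean()) / (img.std() + eps)

def sensitivity_factor(img):
    """Toy visual-sensitivity scalar from NSS: variance of the MSCN
    coefficients (an illustrative stand-in for the paper's NSS features)."""
    return float(np.var(mscn_coefficients(img)))

def saliency_weighted_features(feature_maps, saliency):
    """Weight each conv feature map by the saliency map (same spatial
    size), then global-average-pool to one value per channel."""
    weighted = feature_maps * saliency[None, :, :]  # broadcast over channels
    return weighted.mean(axis=(1, 2))

def predict_quality(feature_maps, saliency, img, w, b):
    """Linear regression head on sensitivity-scaled, saliency-weighted
    features (the trained regression units in the paper)."""
    feats = saliency_weighted_features(feature_maps, saliency)
    feats = feats * sensitivity_factor(img)
    return float(feats @ w + b)

# Toy example with random data standing in for a real image and CNN output.
rng = np.random.default_rng(0)
img = rng.random((32, 32))        # grayscale test image
fmaps = rng.random((8, 32, 32))   # 8 channels of conv features
sal = rng.random((32, 32))        # estimated saliency map
w, b = rng.random(8), 0.1         # regression weights (would be learned)
score = predict_quality(fmaps, sal, img, w, b)
print(score)
```

In the paper the saliency map and NSS features come from the actual image under test and the regression weights are fitted against ground-truth opinion scores; here random arrays merely demonstrate the shapes and the flow of the weighting.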

Lamichhane, K., Carli, M., Battisti, F. (2023). A CNN-based no reference image quality metric exploiting content saliency. SIGNAL PROCESSING-IMAGE COMMUNICATION, 111, 116899 [10.1016/j.image.2022.116899].

A CNN-based no reference image quality metric exploiting content saliency

Lamichhane, Kamal; Carli, Marco; Battisti, F.
2023-01-01

Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11590/424316
Citations
  • PMC: N/A
  • Scopus: 2
  • Web of Science: 0