
Battisti, F., Baldoni, S., Brizzi, M., & Carli, M. (2018). A feature-based approach for saliency estimation of omni-directional images. Signal Processing: Image Communication, 69, 53–59. https://doi.org/10.1016/j.image.2018.03.008

A feature-based approach for saliency estimation of omni-directional images

Battisti, Federica; Baldoni, Sara; Brizzi, Michele; Carli, Marco
2018-01-01

Abstract

Omni-directional imaging records the visual information from any direction with respect to a given viewpoint. It is gaining popularity among consumers due to the fast spread of low-cost devices for both acquisition and rendering. The possibility to render the whole surrounding space represents a further step towards immersivity, providing the user with the illusion of physically being in a virtual environment. Understanding visual attention mechanisms for these images is a relevant topic for processing, coding, and exploiting such data. In this contribution, a saliency model for omni-directional images is presented. It is based on the combination of low-level and semantic features: the former account for texture, viewport saliency, hue, and saturation, while the latter capture the impact of the presence of human subjects on saliency. The proposed model was tested in the "Salient360! Visual attention modeling for 360° Images" Grand Challenge. The model, the achieved results, and the related findings are presented and discussed.
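The abstract describes merging several normalized feature maps (texture, viewport saliency, hue, saturation, and a semantic human-presence map) into one saliency map. As a rough illustration of that kind of fusion — a minimal sketch, not the paper's actual method, with illustrative weights and random stand-in maps — the combination step could look like:

```python
import numpy as np

def combine_feature_maps(maps, weights):
    """Fuse per-pixel feature maps into a single saliency map.

    Each map is min-max normalized to [0, 1], then the maps are merged
    by a weighted average. Weights here are illustrative placeholders,
    not the values used in the paper.
    """
    combined = np.zeros_like(maps[0], dtype=float)
    for m, w in zip(maps, weights):
        m = m.astype(float)
        rng = m.max() - m.min()
        if rng > 0:  # avoid division by zero on a constant map
            m = (m - m.min()) / rng
        combined += w * m
    return combined / sum(weights)

# Toy example: random stand-ins for texture, hue, and saturation maps
# on an equirectangular-shaped grid (height x 2*height).
rng = np.random.default_rng(0)
maps = [rng.random((64, 128)) for _ in range(3)]
saliency = combine_feature_maps(maps, weights=[0.5, 0.25, 0.25])
print(saliency.shape)  # (64, 128)
```

The weighted-average form keeps the output in [0, 1], which makes the fused map directly comparable across images.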
Files in this item:

File: IMAGE 15351.pdf
Access: open access
Description: Article in press
Type: Post-print document
Size: 3.42 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11590/332488
Citations
  • Scopus: 42
  • Web of Science: 32