
Seitaj, O. (2026). Processing and classification of medical images for the automatic recognition of pathologies.

PROCESSING AND CLASSIFICATION OF MEDICAL IMAGES FOR THE AUTOMATIC RECOGNITION OF PATHOLOGIES

Oltiana Seitaj
2026-05-12

Abstract

Accurate segmentation of lung tumours in computed tomography (CT) images is a fundamental task for diagnosis, treatment planning, and disease monitoring, yet it remains challenging in routine clinical practice. Manual delineation is time-consuming and affected by inter-observer variability, while many deep learning–based segmentation approaches reported in the literature show limited robustness when applied to heterogeneous real-world clinical data acquired under different imaging conditions. This thesis proposes and rigorously evaluates a complete processing chain for automatic lung tumour segmentation in CT scans, based on the integration of a dedicated preprocessing stage and established deep learning segmentation models. The contribution of this work lies in the systematic design, motivation, and quantitative assessment of preprocessing and post-inference strategies that enhance segmentation robustness, generalization, and interpretability within deep learning–based tumour segmentation frameworks. The proposed methodology includes a tumour-centric preprocessing pipeline designed to address key limitations of state-of-the-art approaches, such as intensity variability, low contrast at tumour boundaries, voxel misalignment, and class imbalance. The impact of these preprocessing choices on the segmentation performance of deep learning models is explicitly analyzed and compared under different configurations. In addition, an explainability-driven post-inference analysis framework based on Gradient-weighted Class Activation Mapping (Grad-CAM) is introduced to support model interpretation and reduce false-positive detections without modifying or retraining the segmentation networks. The processing chain is validated using two independent lung CT datasets. 
A large real-world clinical dataset comprising more than 5,000 scans is used to train and evaluate the models under controlled preprocessing conditions, while a second dataset is employed to assess cross-dataset generalization. Multiple convolutional neural network architectures, including U-Net, a fully convolutional network, and a lightweight custom model, are evaluated using standard quantitative metrics and qualitative analysis, enabling a structured comparison with respect to robustness and computational efficiency. The experimental results demonstrate that the proposed preprocessing and post-inference design choices play a critical role in determining segmentation accuracy and stability across heterogeneous datasets. Overall, this thesis provides a well-motivated and systematically evaluated deep learning–based segmentation framework, highlighting the importance of preprocessing design and explainability in achieving reliable performance in real clinical environments.
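As an illustration of the kind of intensity preprocessing the abstract refers to (addressing intensity variability and low contrast at tumour boundaries), a minimal sketch of a common CT normalization step is shown below. The window bounds and function name are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def normalize_ct(volume, hu_min=-1000.0, hu_max=400.0):
    """Clip Hounsfield units to a lung-style window and rescale to [0, 1].

    Clipping suppresses out-of-range values (air, bone, metal artifacts)
    so that network inputs share a consistent intensity distribution
    across scanners and acquisition protocols.
    """
    clipped = np.clip(volume, hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)

# Toy HU values: below-window air, lung parenchyma, water, above-window bone.
scan = np.array([-1200.0, -500.0, 0.0, 600.0])
print(normalize_ct(scan))
```

This is only one of several plausible choices (z-score normalization per scan is another); the thesis evaluates the impact of such preprocessing configurations on segmentation performance.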
ELETTRONICA APPLICATA
Lung Tumour Segmentation; Deep Learning; Preprocessing Dataset; Medical Image Analysis; Intensity Normalization
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11590/540736
Warning: the data shown has not been validated by the university.
