
Deep learning model segments nasal cavity from CT scans with high accuracy in preclinical evaluation.

Key Takeaway
Consider this preclinical segmentation model for surgical planning, but recognize results are from one dataset and need external validation.

This is a preclinical evaluation of a deep learning architecture called AFS-DSN (Adaptive Frequency-Spatial Dual-Stream Network) for binary segmentation of the nasal cavity complex from CT scans. The study used 130 CT volumes from the NasalSeg dataset, with a 70/15/15 train/validation/test split, and compared the model to a baseline segmentation method.

The authors report that the AFS-DSN model achieved an overall mean Dice coefficient of 94.34% (SD 2.30%) for segmentation accuracy. In thin-wall regions, the Dice coefficient was 91.34% compared to 90.57% for the baseline (p = 0.004). The Surface Dice at 1 mm tolerance was 0.874 versus 0.868 for the baseline (p = 0.010). A lighter version (AFS-DSN-Lite) with 27.41M parameters showed comparable performance with a Dice coefficient of 94.37%. A 3-fold cross-validation yielded a mean Dice of 94.59% (SD 0.31%), suggesting robustness.
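The Dice coefficient that anchors these results is the standard overlap metric for segmentation: twice the intersection of the two masks divided by the sum of their sizes. As an illustration (not the authors' code), a minimal NumPy implementation for binary masks:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks;
    conventionally defined as 1.0 when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy masks: 6 predicted voxels, 5 reference voxels, 4 overlapping
pred   = np.array([[1,1,0,0],[1,1,0,0],[1,1,0,0],[0,0,0,0]])
target = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[1,0,0,0]])
print(dice_coefficient(pred, target))  # 2*4/(6+5) ≈ 0.7273
```

A score of 94.34% therefore means the predicted and reference segmentations overlap almost completely by volume, though volume overlap alone can hide boundary errors in thin structures.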

The authors note that this is a preclinical study using a single dataset, and results may not generalize to other populations or imaging settings. No safety data or adverse events were reported, which is expected for a preclinical segmentation study. The method is positioned as suitable for surgical planning applications where sub-millimeter accuracy is clinically relevant, but this remains a preclinical finding that requires external validation.
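The Surface Dice at 1 mm tolerance reported above scores boundary agreement rather than volume overlap, which is why it is the more telling metric where sub-millimeter accuracy matters. A brute-force sketch for small binary masks follows; this is illustrative only, the `boundary_points` helper and the voxel-unit tolerance are simplifying assumptions here, and practical implementations use distance transforms and the scan's physical voxel spacing:

```python
import numpy as np

def boundary_points(mask):
    """Coordinates of foreground voxels with a background face-neighbor."""
    mask = np.asarray(mask, dtype=bool)
    p = np.pad(mask, 1, constant_values=False)
    center = (slice(1, -1),) * mask.ndim
    bg_neighbor = np.zeros(mask.shape, dtype=bool)
    for ax in range(mask.ndim):
        for sl in (slice(0, -2), slice(2, None)):
            idx = list(center)
            idx[ax] = sl
            bg_neighbor |= ~p[tuple(idx)]
    return np.argwhere(mask & bg_neighbor)

def surface_dice(pred, target, tol=1.0):
    """Surface Dice at tolerance tol (voxel units): the fraction of both
    boundaries lying within tol of the other mask's boundary."""
    bp, bt = boundary_points(pred), boundary_points(target)
    if len(bp) == 0 or len(bt) == 0:
        return float(len(bp) == len(bt))
    d = np.linalg.norm(bp[:, None, :] - bt[None, :, :], axis=-1)
    close = (d.min(axis=1) <= tol).sum() + (d.min(axis=0) <= tol).sum()
    return close / (len(bp) + len(bt))

# Identical masks agree perfectly; a diagonal shift degrades the score
a = np.zeros((6, 6), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((6, 6), dtype=bool); b[2:5, 2:5] = True
print(surface_dice(a, a))              # 1.0
print(surface_dice(a, b, tol=1.0))     # 0.875
```

On this reading, the reported 0.874 means roughly 87% of boundary surface points in the predicted and reference segmentations lie within 1 mm of each other.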

Study Details

Evidence Level: 5
Published: Apr 2026
Original Abstract
Accurate segmentation of nasal and paranasal sinus structures from CT scans is critical for surgical planning and treatment evaluation in rhinology. However, the complex anatomical topology and thin-wall boundaries of these structures pose significant challenges for automated segmentation methods. We propose AFS-DSN (Adaptive Frequency-Spatial Dual-Stream Network), a novel deep learning architecture that integrates multi-scale wavelet decomposition with spatial feature learning for binary segmentation of the nasal cavity complex. Our method employs a dual-stream encoder with a frequency branch utilizing three wavelet scales (db1, db2, db4) to capture 24 frequency sub-bands, enabling enhanced boundary detection in anatomically challenging regions. Cross-domain attention and adaptive routing mechanisms dynamically fuse spatial and frequency features based on local tissue characteristics. We formulate the task as binary segmentation where all five anatomical structures (maxillary sinus, sphenoid sinus, ethmoid sinus, frontal sinus, and nasal cavity) are treated as a unified foreground region against the background, prioritizing clinical boundary detection over individual structure differentiation. Evaluated on the NasalSeg dataset (130 CT volumes) with a 70/15/15 train/validation/test split, AFS-DSN achieves 94.34% (mean Dice, SD 2.30%) overall Dice coefficient with statistically significant improvements in thin-wall regions (91.34% vs. 90.57% baseline, p = 0.004) and statistically significant improvement in Surface Dice at 1 mm tolerance (0.874 vs. 0.868 baseline, p = 0.010), demonstrating enhanced boundary precision while maintaining sub-second inference time, making the method suitable for surgical planning applications where sub-millimeter accuracy is clinically relevant. 
To address concerns regarding model complexity, we further introduce AFS-DSN-Lite, a parameter-efficient variant (27.41M parameters) that achieves comparable performance (94.37% Dice) through depthwise separable convolutions, and we validate robustness via 3-fold cross-validation (mean Dice: 94.59%, SD 0.31%).
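The abstract's count of 24 frequency sub-bands follows from one separable decomposition level per wavelet: a 3D wavelet transform splits a volume into 2³ = 8 sub-bands (LLL through HHH), and three wavelets (db1, db2, db4) give 24. A sketch of the db1 (Haar) case in NumPy, illustrative only and not the authors' implementation:

```python
import numpy as np

def haar_split(x, axis):
    """Single-level Haar (db1) analysis along one axis: orthonormal
    low-pass (pairwise average) and high-pass (pairwise difference) halves."""
    even = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_3d(volume):
    """One decomposition level of a 3D volume: 2^3 = 8 sub-bands (LLL..HHH)."""
    bands = [volume]
    for axis in range(3):
        bands = [half for b in bands for half in haar_split(b, axis)]
    return bands

vol = np.random.rand(8, 8, 8)
bands = haar_3d(vol)
print(len(bands))        # 8 sub-bands per wavelet; 3 wavelets -> 24 total
print(bands[0].shape)    # (4, 4, 4): each sub-band at half resolution
```

Because the Haar filters are orthonormal, the sub-bands preserve the volume's total energy, and the high-pass bands concentrate exactly the sharp transitions (thin bony walls) that the frequency branch is designed to emphasize.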