
Classification framework shows 79% to 98% performance for oral squamous cell carcinoma on histopathological images

Key Takeaway
Consider the classification framework's performance on oral squamous cell carcinoma images, but note that staining variation limits generalization.

This computer-aided classification study assessed a framework combining staining-bias suppression and structured multiple-instance aggregation for classifying oral squamous cell carcinoma from histopathological images. The study evaluated images from two test sets plus an external validation on an independent retrospective clinical cohort from a local hospital; sample sizes were not reported. The comparators were traditional and deep learning baselines, and the primary outcome was classification performance measured by Accuracy, F1 score, and AUC.
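As a rough illustration of the multiple-instance idea mentioned above, the sketch below pools toy patch features into a single slide-level representation using attention weights. All shapes, names, and the attention form are assumptions for illustration only; the paper's actual aggregator also incorporates spatial priors (neighborhood continuity and long-range dependencies), which this sketch omits.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mil_aggregate(patch_feats, w_attn):
    """Hypothetical attention-based multiple-instance pooling: each image
    patch is an instance; attention weights decide how much each patch
    contributes to the slide-level representation."""
    scores = patch_feats @ w_attn   # one raw score per patch
    weights = softmax(scores)       # normalized attention over patches
    return weights @ patch_feats    # weighted slide-level embedding

rng = np.random.default_rng(1)
patches = rng.normal(size=(16, 32))  # toy data: 16 patches, 32-dim features
w = rng.normal(size=32)
slide_embedding = mil_aggregate(patches, w)
print(slide_embedding.shape)  # (32,)
```

A slide-level classifier would then score `slide_embedding`; patches with high attention weight indicate which regions drove the decision.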

Main results showed Accuracy of 87.35% on one test set and 79.34% on another test set, F1 scores of 91.27% and 86.86%, and AUC values of 98.04% and 90.74%, respectively. No p-values, confidence intervals, or effect sizes were reported. Safety and tolerability data were not provided, as this was a computational study without patient interventions.
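For readers less familiar with the reported metrics, the toy sketch below computes Accuracy and F1 from binary predictions. The data are invented for illustration and have no relation to the study's results.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # toy labels
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]  # toy predictions
print(round(accuracy(y_true, y_pred), 4))  # 0.75
print(round(f1_score(y_true, y_pred), 4))  # 0.8
```

AUC, the third reported metric, is computed from continuous prediction scores rather than hard labels, which is why it can remain high (98.04%) even when Accuracy is lower.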

Key limitations: staining variations and sparse local lesions can cause models to overfit color differences and weaken cross-domain generalization. The framework showed practical value under real-world acquisition and staining variations, but follow-up duration was not reported. Clinicians should interpret these findings cautiously given the observational design and the potential for overfitting under varied staining conditions.

Study Details

Study type: Cohort
Evidence: Level 3
Published: Apr 2026
Original Abstract
Introduction

Oral squamous cell carcinoma histopathological image classification is often challenged by staining variations and sparse local lesions, which can cause models to overfit color differences and weaken cross-domain generalization.

Methods

A classification framework combining staining-bias suppression and structured multiple-instance aggregation was developed. In representation learning, stain-related features were disentangled from morphological and structural information, and a gated suppression mechanism was introduced to reduce color interference while enhancing tissue architecture and cellular morphology cues. In decision aggregation, image patches were treated as instances and spatial priors were incorporated to capture both neighborhood continuity and long-range dependencies.

Results

The proposed method achieved Acc 87.35%, F1 91.27%, and AUC 98.04% on one test set, and Acc 79.34%, F1 86.86%, and AUC 90.74% on another test set. It consistently outperformed traditional and deep learning baselines. External validation on an independent retrospective clinical cohort from a local hospital also showed stable performance.

Discussion

The results indicate that the proposed method can effectively alleviate the impact of staining bias and improve classification robustness. Its strong performance on external data further supports its practical value under real-world acquisition and staining variations.
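The gated stain-suppression step described in the Methods could be sketched, under assumed feature shapes and a simple sigmoid gate (none of which come from the paper), roughly as follows: a gate computed from the stain branch down-weights stain-related features so the fused representation is dominated by morphology and structure cues.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_stain_suppression(stain_feat, morph_feat, w_gate, b_gate):
    """Hypothetical gating: learn how strongly each stain-related feature
    should be suppressed, then fuse with the morphology features."""
    gate = sigmoid(stain_feat @ w_gate + b_gate)   # per-feature gate in (0, 1)
    return morph_feat + (1.0 - gate) * stain_feat  # suppress stain channels

rng = np.random.default_rng(0)
stain = rng.normal(size=(4, 8))   # toy data: 4 patches, 8-dim stain features
morph = rng.normal(size=(4, 8))   # toy morphology/structure features
w = rng.normal(size=(8, 8))
b = np.zeros(8)
fused = gated_stain_suppression(stain, morph, w, b)
print(fused.shape)  # (4, 8)
```

In a trained model, `w_gate` and `b_gate` would be learned so that features driven by staining color are gated toward zero while tissue-architecture cues pass through.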