01
Introduction
Severe burns can threaten vital skin functions, and timely, accurate diagnosis is critical. Many regions face a shortage of specialists capable of assessing burn depth effectively, which can delay treatment and increase complications.
Goal
This project uses machine learning and image processing to detect burns, classify severity, and estimate affected area from user-submitted images. A simple web interface demonstrates the system's ability to provide instant diagnostic feedback, supporting faster, more consistent care, especially in low-resource settings.
02
Research
Before developing our model, we reviewed multiple research studies on burn wound detection, segmentation, and classification. These studies provided valuable insights into existing techniques and guided the design choices for our system. We also explored multiple datasets and, using various collection and preprocessing techniques, compiled a total of 5,676 images to train and evaluate our models effectively.
03
Design
This system is designed to classify burn images through an automated, end-to-end workflow. Users upload images with consent, after which the system removes backgrounds, extracts burn areas, and classifies severity using a RegNetY-080 deep learning model.

It is built for usability, speed, and accuracy, with scalable batch processing, secure handling of user data, and cross-platform accessibility.
04
Software
For this project, we used a variety of software and libraries to support development, model training, and deployment. Python served as the primary programming language, with PyTorch for building and training neural networks, and XGBoost and LightGBM for gradient boosting models. OpenCV was utilized for image processing and manipulation, while Flask was used to develop the web application interface for model deployment. These tools provided a robust and flexible environment for both experimentation and practical implementation.
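As a sketch of how Flask can expose the model behind a web endpoint (the route name, field names, and the `run_pipeline` stub below are illustrative placeholders, not the project's actual API):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_pipeline(image_bytes):
    """Hypothetical stub standing in for preprocessing + model inference."""
    return "second-degree", 4.2  # placeholder severity and affected-area estimate

@app.route("/classify", methods=["POST"])
def classify():
    # read the uploaded image and return the model's prediction as JSON
    image_bytes = request.files["image"].read()
    severity, area_pct = run_pipeline(image_bytes)
    return jsonify({"severity": severity, "affected_area_pct": area_pct})
```

In deployment, `run_pipeline` would call the preprocessing and classification stages described in the Technical section.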
05
Technical
Preprocessing and Burn Region Isolation
The system for automated skin burn classification uses a multi-stage pipeline combining background removal, burn region isolation, and deep learning-based classification. Images are preprocessed using models like RemBG and U-2-Net for accurate background removal, with a quality assessment framework selecting the best output per image. Targeted preprocessing, including YCbCr and HSV transformations along with Fuzzy C-Means clustering, isolates burn areas from healthy skin, producing standardized images for the classification stage.
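To illustrate the clustering step, here is a minimal NumPy sketch of Fuzzy C-Means applied to chroma values, with RGB to YCbCr computed from the standard BT.601 formulas; the pixel values and two-cluster setup are synthetic stand-ins, not the project's exact parameters:

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """RGB -> (Cb, Cr) chroma channels via the BT.601 conversion formulas."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                # memberships sum to 1 per pixel
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))                  # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# synthetic pixels: reddish "burn" tones vs. paler "skin" tones (illustrative values)
rng = np.random.default_rng(1)
burn = np.array([180.0, 60.0, 50.0]) + rng.normal(0, 5, (100, 3))
skin = np.array([220.0, 190.0, 170.0]) + rng.normal(0, 5, (100, 3))
X = rgb_to_cbcr(np.vstack([burn, skin]))

centers, U = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)  # hard assignment: burn region vs. healthy skin
```

Working in chroma space helps because burn and healthy-skin pixels that look similar in raw RGB separate more cleanly along the Cb/Cr axes.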

Classification is performed using a custom-trained RegNetY-080 model, which incorporates Squeeze-and-Excitation blocks to focus on informative features. The model is trained from scratch using PyTorch with data augmentation, class balancing, and a cosine annealing scheduler for optimized convergence. This approach achieves high accuracy (94.1%) and robust performance across burn severity classes, delivering reliable predictions directly from preprocessed images.

To further improve performance, an ensemble learning strategy was implemented, combining KNN, Random Forest, SVM, XGBoost, and LightGBM with optimized weights. A two-stage hierarchical classifier refines borderline cases between first- and second-degree burns using a CatBoost model. Evaluated with GroupKFold cross-validation, the ensemble approach achieves strong generalization (82.7% accuracy), particularly enhancing detection of severe burns while maintaining overall robustness and stability.
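The weighted soft-voting idea can be sketched with scikit-learn as below; for portability this sketch substitutes scikit-learn's `GradientBoostingClassifier` for XGBoost/LightGBM, uses synthetic features in place of real image features, and picks illustrative weights rather than the optimized ones:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic stand-in for burn image features; groups mimic per-patient grouping
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
groups = np.arange(300) // 10  # 30 "patients" with 10 images each

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("gb", GradientBoostingClassifier(n_estimators=50, random_state=0)),
    ],
    voting="soft",          # average predicted class probabilities
    weights=[1, 2, 1, 2],   # illustrative weights, not the optimized ones
)

# GroupKFold keeps each group entirely in train or test, preventing leakage of
# near-duplicate images from the same source across the split
scores = cross_val_score(ensemble, X, y, cv=GroupKFold(n_splits=5), groups=groups)
```

Grouped cross-validation is what makes the reported generalization figure trustworthy: accuracy is measured only on sources the ensemble never saw during training.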
06
Conclusion
We developed an automated system that classifies burn severity and estimates affected area from images using machine learning. A custom dataset of 5,676 images and a custom-trained RegNetY-080 model enabled 94.1% classification accuracy, with a weighted ensemble providing robust generalization across severity classes.

