Multiple Anatomical Structure Recognition in Fetal Ultrasound Images

In this report, I develop and evaluate custom convolutional neural networks (CNNs) for classifying anatomical structures in fetal ultrasound (US) images. The study aims to assist sonographers by automating the recognition of key anatomical regions: the head, heart, abdomen, and other structures.

A CNN was trained on a labeled dataset of 266 subjects. To improve classification performance and address class imbalance, I used elastic deformation for data augmentation. Two models were tested: a multiclass CNN with a softmax output layer, and a one-vs-rest CNN with sigmoid output units (both are sketched below).
Hyperparameter optimization through grid search identified the best-performing CNN architecture, which achieved a test accuracy of 87.81 ± 1.96%, outperforming a simpler baseline model. Data augmentation and dropout regularization were shown to significantly improve model performance.
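To illustrate the augmentation step, the sketch below applies an elastic deformation to a single 2D ultrasound image using SciPy's Gaussian-smoothed random displacement fields. The displacement parameters (`alpha`, `sigma`) and the choice of SciPy are assumptions for illustration, not details taken from the study's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, rng=None):
    """Apply a random elastic deformation to a 2D grayscale image.

    alpha scales the displacement field, sigma smooths it.
    (Hypothetical parameter values -- not taken from the report.)
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape

    # Random displacement fields, smoothed with a Gaussian filter
    dx = gaussian_filter(rng.uniform(-1, 1, size=(h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, size=(h, w)), sigma) * alpha

    # Displaced sampling grid
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])

    # Resample the image at the displaced coordinates
    return map_coordinates(image, coords, order=1, mode="reflect")
```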
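The two classifiers differ essentially in their output head and loss function. The following is a minimal sketch, assuming a Keras/TensorFlow implementation with four classes (head, heart, abdomen, other); the backbone layers, input size, and dropout rate are hypothetical placeholders rather than the architecture selected by the grid search.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # head, heart, abdomen, other

def build_cnn(output_mode="multiclass", dropout=0.5):
    """Small CNN backbone with either a softmax (multiclass) or
    sigmoid (one-vs-rest) output head. Layer sizes are illustrative."""
    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),  # assumed input size
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(dropout),
    ])
    if output_mode == "multiclass":
        # Single softmax head: class probabilities sum to one
        model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
    else:
        # One-vs-rest: an independent sigmoid unit per class
        model.add(layers.Dense(NUM_CLASSES, activation="sigmoid"))
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
    return model
```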
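The grid search can be expressed as an exhaustive loop over candidate hyperparameter combinations, retaining the configuration with the highest validation accuracy. The search space below (dropout rate and output mode) is purely illustrative; the report's actual grid is not reproduced here.

```python
import itertools
import numpy as np

# Hypothetical hyperparameter grid -- not the study's actual search space.
grid = {
    "dropout": [0.25, 0.5],
    "output_mode": ["multiclass", "one_vs_rest"],
}

def grid_search(x_train, y_train, x_val, y_val, epochs=20):
    """Train one model per hyperparameter combination and return the
    configuration with the highest validation accuracy."""
    best_acc, best_cfg = -np.inf, None
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        model = build_cnn(**cfg)  # from the previous sketch
        model.fit(x_train, y_train, epochs=epochs, verbose=0)
        _, acc = model.evaluate(x_val, y_val, verbose=0)
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc
```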

While the multiclass model performed slightly better overall, the one-vs-rest model achieved comparable F1 scores at a higher computational cost. Confusion matrices and t-SNE plots were used to analyse the misclassifications, most notably between the heart and abdomen classes.
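As a sketch of this error analysis, a confusion matrix and a t-SNE embedding of penultimate-layer features can be produced with scikit-learn and Matplotlib. The code below assumes the hypothetical Keras model from the earlier sketches and one-hot encoded test labels; it is illustrative rather than the study's actual plotting code.

```python
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.manifold import TSNE
from sklearn.metrics import ConfusionMatrixDisplay

CLASS_NAMES = ["head", "heart", "abdomen", "other"]

def plot_diagnostics(model, x_test, y_test):
    """Confusion matrix of predictions plus a 2D t-SNE projection of
    penultimate-layer features (illustrative, not from the report)."""
    probs = model.predict(x_test, verbose=0)
    y_pred = probs.argmax(axis=1)
    y_true = y_test.argmax(axis=1)  # assumes one-hot labels

    # Confusion matrix: highlights e.g. heart/abdomen confusions
    ConfusionMatrixDisplay.from_predictions(
        y_true, y_pred, display_labels=CLASS_NAMES)

    # t-SNE on features from the layer just before the output head
    feature_extractor = tf.keras.Model(
        inputs=model.inputs, outputs=model.layers[-2].output)
    feats = feature_extractor.predict(x_test, verbose=0)
    emb = TSNE(n_components=2, perplexity=30).fit_transform(feats)

    plt.figure()
    for c, name in enumerate(CLASS_NAMES):
        mask = y_true == c
        plt.scatter(emb[mask, 0], emb[mask, 1], s=8, label=name)
    plt.legend()
    plt.title("t-SNE of penultimate-layer features")
    plt.show()
```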