AI Models

AI 2D TAG Models
If you are familiar with AI, then you probably already know the U-NET model and most of its variants.

If you are not, however, you are faced with a choice between different "flavors" of the U-NET model and have to decide which one is best for you.

This is not an easy choice! They are all fairly similar and offer only slight improvements over the basic U-NET model. Of course, when you read the original paper for each of these variants, it is presented as an improvement on the basic model, but how good are they really?

To help you decide, I tested all the default models in the "AI 2D TAG" module with the same database (100 MR slices of the abdomen) using 4 classes (background, muscle, subcutaneous fat and intra-abdominal fat). I used 70 slices for training and 30 slices for validation and let the training run for 500 epochs.

The "Time to train" is the time it took to train the AI on a computer with an Nvidia RTX-3090 GPU.

All 6 default models that I provide in sliceO seem to converge to roughly equivalent solutions within 300 epochs.

For a description of some of the U-NET variants, I would suggest reading the article "U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications".

Model            Publication   Nb of parameters   Time to train   Mean DICE coeff.
U-NET            2015           7,760,644         11 min 26 sec   0.9055
U-NET Light      2015           2,054,916          9 min 33 sec   0.8656
Residual U-NET   2017           8,222,724         22 min 18 sec   0.9085
Att U-NET        2018           8,902,376         15 min 27 sec   0.9146
Dense U-NET      2019          13,989,796         15 min 41 sec   0.9110
Att Res U-NET    2021           8,946,856         16 min 13 sec   0.9154

(The loss plot and per-class metrics for each model are shown in the accompanying figures.)
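For reference, here is one way to compute the per-class and mean DICE coefficients reported in the table, assuming integer label maps. This is a generic NumPy sketch, not necessarily the exact formulation used inside sliceO.

```python
import numpy as np

def dice_per_class(pred, truth, n_classes):
    """Per-class Dice coefficient between two integer label maps."""
    scores = []
    for c in range(n_classes):
        p = (pred == c)
        t = (truth == c)
        inter = np.logical_and(p, t).sum()
        denom = p.sum() + t.sum()
        # Convention: a class absent from both maps counts as a perfect match.
        scores.append(1.0 if denom == 0 else 2.0 * inter / denom)
    return scores

# Example with two small 4-class label maps.
truth = np.array([[0, 0, 1, 1],
                  [0, 2, 2, 3],
                  [0, 2, 3, 3]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 2, 3, 3],
                  [0, 2, 3, 3]])
scores = dice_per_class(pred, truth, n_classes=4)
print("Dice per class:", [f"{s:.3f}" for s in scores])
print("Mean Dice:", f"{np.mean(scores):.3f}")
```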

What we can conclude from this comparison is that the most recent model, the "Attention Residual U-NET", is probably the best choice for MR and CT segmentation.
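If you are curious about what the "Att" variants add: they insert attention gates on the skip connections, so the decoder can down-weight encoder features that are not relevant at a given location. Here is a minimal PyTorch sketch of such a gate, loosely in the spirit of the 2018 Attention U-NET paper; the actual implementation in sliceO may differ (for instance, the gating signal is usually upsampled from a coarser decoder level).

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate applied to a skip connection."""
    def __init__(self, gate_channels, skip_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, gate, skip):
        # gate: decoder features, skip: encoder features (same H x W here for simplicity).
        attn = torch.relu(self.w_g(gate) + self.w_x(skip))
        attn = torch.sigmoid(self.psi(attn))   # one weight per pixel, in [0, 1]
        return skip * attn                     # re-weighted skip connection

# Example: gate a 64-channel skip connection with 128-channel decoder features.
gate = torch.randn(1, 128, 32, 32)
skip = torch.randn(1, 64, 32, 32)
gated = AttentionGate(128, 64, 32)(gate, skip)
print(gated.shape)  # torch.Size([1, 64, 32, 32])
```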

One of the models, "U-NET Light", is an implementation of the standard U-NET with a lot fewer trainable parameters. It gives slightly less accurate results, but it can run on a GPU with a limited amount of RAM, so if your GPU has little memory it may be a good compromise.
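I do not know exactly how the U-NET Light is slimmed down, but the most common way to build a lighter U-NET is to reduce the number of filters at every level, and a quick parameter count shows why that pays off. The blocks below are generic placeholders, not the actual sliceO layers.

```python
import torch.nn as nn

def count_trainable(model):
    """Number of trainable parameters in a PyTorch model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def double_conv(in_ch, out_ch):
    """The two-convolution block used at each U-NET level."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())

# Halving the number of filters at every level roughly quarters the
# parameter count of each block, which is where most of the savings
# of a "light" U-NET would come from.
full  = double_conv(64, 128)   # a block from a standard-width U-NET
light = double_conv(32, 64)    # the same block at half the width
print(f"full : {count_trainable(full):,} parameters")
print(f"light: {count_trainable(light):,} parameters")
```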

