Models
Collection of (generative) models for ultrasound imaging.
zea contains a collection of models for various tasks, all located in the zea.models package.
See the following dropdown for a list of available models:
Available models
- zea.models.echonet.EchoNetDynamic: A model for left ventricle segmentation.
- zea.models.carotid_segmenter.CarotidSegmenter: A model for carotid artery segmentation.
- zea.models.echonetlvh.EchoNetLVH: A model for left ventricle hypertrophy segmentation.
- zea.models.unet.UNet: A simple U-Net implementation.
- zea.models.lpips.LPIPS: A model implementing the LPIPS perceptual similarity metric.
- zea.models.taesd.TinyAutoencoder: A tiny autoencoder model for image compression.
- zea.models.regional_quality.MobileNetv2RegionalQuality: A scoring model for myocardial regions in apical views.
- zea.models.lv_segmentation.AugmentedCamusSeg: An nnU-Net based left ventricle and myocardium segmentation model.
Presets for these models can be found in zea.models.presets.
To use these models, import them from the corresponding module in the zea.models package and load the pretrained weights using the from_preset() method. For example:
>>> from zea.models.unet import UNet
>>> model = UNet.from_preset("unet-echonet-inpainter")
You can list all available presets using the presets attribute:
>>> from zea.models.unet import UNet
>>> presets = list(UNet.presets.keys())
>>> print(f"Available built-in zea presets for UNet: {presets}")
Available built-in zea presets for UNet: ['unet-echonet-inpainter']
zea generative models
In addition to the task-specific models above, zea provides both classical and deep generative models for tasks such as image generation, inpainting, and denoising. These models inherit from zea.models.generative.GenerativeModel or zea.models.deepgenerative.DeepGenerativeModel.
Typically, these models have some additional methods, such as:
- fit() for training the model on data
- sample() for generating new samples from the learned distribution
- posterior_sample() for drawing samples from the posterior given measurements
- log_density() for computing the log-probability of data under the model
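As a sketch of what this interface looks like in practice (this is a toy stdlib-only illustration, not the zea implementation), consider a 1-D Gaussian "generative model" fitted by maximum likelihood, with fit(), sample(), and log_density() methods following the naming above:

```python
import math
import random


class ToyGaussianModel:
    """Toy 1-D Gaussian model illustrating the fit()/sample()/log_density()
    interface described above. NOT the zea implementation, just a sketch."""

    def __init__(self):
        self.mean = 0.0
        self.std = 1.0

    def fit(self, data):
        # Maximum-likelihood estimates of mean and standard deviation.
        n = len(data)
        self.mean = sum(data) / n
        var = sum((x - self.mean) ** 2 for x in data) / n
        self.std = math.sqrt(var)

    def sample(self, n_samples, rng=random):
        # Draw new samples from the learned distribution.
        return [rng.gauss(self.mean, self.std) for _ in range(n_samples)]

    def log_density(self, x):
        # Log-probability of a single observation under the model.
        z = (x - self.mean) / self.std
        return -0.5 * z * z - math.log(self.std * math.sqrt(2 * math.pi))


model = ToyGaussianModel()
model.fit([1.0, 2.0, 3.0, 4.0, 5.0])
print(round(model.mean, 2))  # 3.0
```

The real zea models follow the same pattern but operate on image tensors and, for the deep models, are trained with Keras rather than closed-form estimates.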
See the following dropdown for a list of available generative models:
Available models
- zea.models.diffusion.DiffusionModel: A deep generative diffusion model for ultrasound image generation.
- zea.models.gmm.GaussianMixtureModel: A Gaussian Mixture Model.
- zea.models.hvae.HierarchicalVAE: A hierarchical variational autoencoder for ultrasound image generation.
An example of how to use the zea.models.diffusion.DiffusionModel is shown below:
>>> from zea.models.diffusion import DiffusionModel
>>> model = DiffusionModel.from_preset("diffusion-echonet-dynamic")
>>> samples = model.sample(n_samples=4)
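The posterior_sample() idea, drawing from p(x | y) given a measurement y, can be illustrated with a conjugate Gaussian toy problem (again a stdlib-only sketch, not the zea diffusion sampler): with prior x ~ N(mu0, s0^2) and measurement y = x + e, e ~ N(0, sn^2), the posterior is Gaussian with a precision-weighted mean:

```python
import random


def posterior_sample(y, mu0, s0, sn, n_samples, rng=random):
    """Sample from p(x | y) for the conjugate model
    x ~ N(mu0, s0^2), y = x + e, e ~ N(0, sn^2).
    Toy illustration of the posterior_sample() concept, not zea code."""
    # Precision-weighted combination of prior and measurement.
    post_var = 1.0 / (1.0 / s0**2 + 1.0 / sn**2)
    post_mean = post_var * (mu0 / s0**2 + y / sn**2)
    return [rng.gauss(post_mean, post_var**0.5) for _ in range(n_samples)]


samples = posterior_sample(y=2.0, mu0=0.0, s0=1.0, sn=1.0, n_samples=4)
print(len(samples))  # 4
```

Diffusion models replace this closed-form posterior with guided iterative denoising, but the interface, measurements in, posterior samples out, is the same.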
Contributing and adding new models
Please follow the guidelines in the Contributing page if you would like to contribute a new model to zea.
The following steps are recommended when adding a new model:
1. Create a new module in the zea.models package for your model: zea.models.mymodel.
2. Add a model class that inherits from zea.models.base.Model. For generative models, inherit from zea.models.generative.GenerativeModel or zea.models.deepgenerative.DeepGenerativeModel as appropriate. Make sure you implement the call() method.
3. Upload the pretrained model weights to our Hugging Face. These should be a config.json and a model.weights.h5 file; see the Keras documentation for how to save these from your model. Simply drag and drop the files onto the Hugging Face website to upload them.
   Tip: It is recommended to use the saving procedure mentioned above. However, alternate saving methods are also possible; see the zea.models.echonet.EchoNet module for an example. In that case, you will need to implement a custom_load_weights() method in your model class.
4. Add a preset for the model in zea.models.presets. This allows you to have multiple weights presets for a given model architecture.
5. Register the presets in your model module by importing the presets module and calling register_presets with the model class as an argument.
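Steps 4 and 5 follow a common preset-registration pattern, sketched below with a toy stdlib-only registry. The names from_preset, presets, and register_presets come from the text above, but the registry internals here are hypothetical, not zea's actual implementation:

```python
# Toy sketch of the preset-registration pattern described above.
# In zea, presets live in zea.models.presets and are attached to a
# model class via register_presets(); the internals here are made up.

class Model:
    presets = {}  # preset name -> constructor config, per subclass

    @classmethod
    def from_preset(cls, name):
        if name not in cls.presets:
            raise KeyError(f"Unknown preset {name!r} for {cls.__name__}")
        return cls(**cls.presets[name])


def register_presets(presets, model_cls):
    # Attach a dict of named presets to the given model class.
    model_cls.presets = dict(presets)


class MyModel(Model):
    def __init__(self, depth=2):
        self.depth = depth


MY_MODEL_PRESETS = {"mymodel-default": {"depth": 4}}
register_presets(MY_MODEL_PRESETS, MyModel)

model = MyModel.from_preset("mymodel-default")
print(sorted(MyModel.presets.keys()))  # ['mymodel-default']
```

This is why each model module must import the presets module and call register_presets: without that call, from_preset() has no named configurations to look up for your class.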