zea.models.lpips¶
LPIPS model for perceptual similarity.
To try this model, simply load one of the available presets:
>>> from zea.models.lpips import LPIPS
>>> model = LPIPS.from_preset("lpips")
Important
This is a zea implementation of the model. For the original paper and code, see the reference below.
Zhang, Richard, et al. “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric.” https://arxiv.org/abs/1801.03924
See also
A tutorial notebook where this model is used: LPIPS: perceptual similarity for ultrasound images.
Functions
- Get the linear head model for LPIPS.
- Get the VGG16 model for perceptual loss.
Classes
- Learned Perceptual Image Patch Similarity (LPIPS) metric.
- class zea.models.lpips.LPIPS(*args, **kwargs)[source]¶
Bases: BaseModel
Learned Perceptual Image Patch Similarity (LPIPS) metric.
Initialize the LPIPS model.
Weights were exported using:
https://github.com/moono/lpips-tf2.x/blob/master/example_export_script/convert_to_tensorflow.py
- Parameters:
net_type (str, optional) – Type of network to use. Defaults to “vgg”.
disable_checks (bool, optional) – Disable input checks; useful for TensorFlow graph mode. Defaults to False.
- call(inputs)[source]¶
Compute the LPIPS metric.
- Parameters:
inputs (list) – List of two input images of shape [B, H, W, C] or [H, W, C]. Images should be in the range [-1, 1].
- Returns:
LPIPS distance between the two images, of shape [B,] or scalar if the inputs have no batch dimension.
- Return type:
Tensor
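As a sketch of typical usage, the inputs can be prepared with NumPy as below; the image sizes and pixel data here are illustrative, and the final zea calls are shown as comments since they require the package and its preset weights:

```python
import numpy as np

# Two illustrative single-channel frames with pixel values in [0, 255].
rng = np.random.default_rng(0)
img_a = rng.uniform(0, 255, (64, 64, 1)).astype("float32")
img_b = rng.uniform(0, 255, (64, 64, 1)).astype("float32")

# call() expects inputs in the range [-1, 1]; rescale from [0, 255].
img_a = img_a / 127.5 - 1.0
img_b = img_b / 127.5 - 1.0

# With zea installed, the distance is then computed as:
# from zea.models.lpips import LPIPS
# model = LPIPS.from_preset("lpips")
# distance = model([img_a, img_b])  # scalar, since inputs have no batch dim
```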
- static preprocess_input(image)[source]¶
Preprocess the input images.
- Parameters:
image (Tensor) – Input image tensor of shape [H, W, C] with optional batch dimension and values in the range [-1, 1].
- Returns:
Preprocessed image tensor of shape [B, H, W, C], with values standardized for the VGG model.
- Return type:
Tensor
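A minimal NumPy-only sketch of the batching convention described above; this mimics only the shape handling (an optional batch dimension is added when absent), not the VGG standardization itself:

```python
import numpy as np

# An unbatched [H, W, C] image with values in [-1, 1].
img = np.zeros((64, 64, 3), dtype="float32")

# An unbatched [H, W, C] input maps to output shape [B, H, W, C] with B == 1;
# an already-batched [B, H, W, C] input keeps its batch dimension.
batched = img[None, ...] if img.ndim == 3 else img
```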