zea.models.echonetlvh

EchoNetLVH model for segmentation of PLAX view cardiac ultrasound.

To try this model, simply load one of the available presets:

>>> from zea.models.echonetlvh import EchoNetLVH

>>> model = EchoNetLVH.from_preset("echonetlvh")

Important

This is a zea implementation of the model. For the original paper and code, see the reference below.

Duffy, Grant, et al. “High-throughput precision phenotyping of left ventricular hypertrophy with cardiovascular deep learning.” JAMA Cardiology 7.4 (2022): 386-395.

See also

A tutorial notebook where this model is used: Task-based transmit beamforming perception-action loop.

Classes

EchoNetLVH(*args, **kwargs)

EchoNet Left Ventricular Hypertrophy (LVH) model for echocardiogram analysis.

class zea.models.echonetlvh.EchoNetLVH(*args, **kwargs)

Bases: BaseModel

EchoNet Left Ventricular Hypertrophy (LVH) model for echocardiogram analysis.

This model performs semantic segmentation on echocardiogram images to identify key anatomical landmarks for measuring left ventricular wall thickness:

  • LVPWd_1: Left Ventricular Posterior Wall point 1

  • LVPWd_2: Left Ventricular Posterior Wall point 2

  • IVSd_1: Interventricular Septum point 1

  • IVSd_2: Interventricular Septum point 2

The model outputs 4-channel logits corresponding to heatmaps for each landmark.

For more information, see the original project page: https://echonet.github.io/lvh/
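Each wall thickness is the distance between its two landmark points. A minimal NumPy sketch of that geometry (the landmark coordinates below are made up for illustration; in practice they would come from extract_key_points_as_indices):

```python
import numpy as np

# Hypothetical landmark coordinates in (x, y) pixel format, ordered as the
# model's output channels: LVPWd_1, LVPWd_2, IVSd_1, IVSd_2.
keypoints = np.array([
    [120.0, 200.0],  # LVPWd_1
    [128.0, 230.0],  # LVPWd_2
    [110.0, 110.0],  # IVSd_1
    [116.0, 146.0],  # IVSd_2
])

def wall_thickness(p1, p2):
    """Euclidean distance between two landmark points, in pixels."""
    return float(np.linalg.norm(p2 - p1))

lvpwd = wall_thickness(keypoints[0], keypoints[1])  # posterior wall thickness
ivsd = wall_thickness(keypoints[2], keypoints[3])   # septal wall thickness
```

Converting pixel distances to physical units would additionally require the probe's pixel spacing, which is outside the scope of this sketch.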

Initialize the EchoNetLVH model.

Parameters:

**kwargs – Additional keyword arguments passed to BaseModel.

call(inputs)

Forward pass of the model.

Parameters:

inputs (Tensor) – Input images of shape [B, H, W, C]. Images should be scan-converted, with pixel values in the range [0, 255].

Returns:

Logits of shape [B, H, W, 4], with one channel per landmark

Return type:

Tensor

expected_coordinate(mask, coordinate_grid=None)

Compute the expected coordinate (center-of-mass) of a heatmap.

This implements a differentiable alternative to taking the argmax of a heatmap (a soft-argmax) by computing the weighted average of pixel coordinates, with the heatmap values as weights.

Reference: https://arxiv.org/pdf/1711.08229

Parameters:
  • mask (Tensor) – Heatmap of shape [B, H, W]

  • coordinate_grid (Tensor, optional) – Grid of coordinates. If None, uses self.coordinate_grid.

Returns:

Expected coordinates of shape [B, 2] in (x, y) format

Return type:

Tensor
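The soft-argmax can be sketched in a few lines of NumPy (a self-contained illustration of the technique, not zea's actual implementation, which operates on framework tensors):

```python
import numpy as np

def expected_coordinate(mask, coordinate_grid=None):
    """Soft-argmax: weighted average of pixel coordinates, weighted by the heatmap.

    mask: [B, H, W] non-negative heatmap (e.g. after a softmax or sigmoid).
    Returns [B, 2] expected coordinates in (x, y) format.
    """
    _, h, w = mask.shape
    if coordinate_grid is None:
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        coordinate_grid = np.stack([xs, ys], axis=-1).astype(float)  # [H, W, 2]
    weights = mask / mask.sum(axis=(1, 2), keepdims=True)  # normalize to sum to 1
    return np.einsum("bhw,hwc->bc", weights, coordinate_grid)

# A delta-like heatmap peaked at (x=3, y=2) recovers exactly that coordinate.
heatmap = np.zeros((1, 5, 5))
heatmap[0, 2, 3] = 1.0
coords = expected_coordinate(heatmap)  # → [[3., 2.]]
```

Unlike a hard argmax, this weighted average is differentiable with respect to the heatmap values, so it can sit inside a training loss.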

extract_key_points_as_indices(logits)

Extract key point coordinates from logits using center-of-mass calculation.

Parameters:

logits (Tensor) – Model output logits of shape [B, H, W, 4]

Returns:

Key point coordinates of shape [B, 4, 2], where each point is in (x, y) format

Return type:

Tensor
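Conceptually, this amounts to normalizing each of the four logit channels into a spatial probability map and taking its center of mass. A hedged NumPy sketch of that idea (assuming a spatial softmax per channel; zea's actual normalization may differ):

```python
import numpy as np

def spatial_softmax(logits):
    """Softmax over the spatial (H, W) dims of [B, H, W, C] logits."""
    flat = logits.reshape(logits.shape[0], -1, logits.shape[-1])
    flat = np.exp(flat - flat.max(axis=1, keepdims=True))
    flat /= flat.sum(axis=1, keepdims=True)
    return flat.reshape(logits.shape)

def extract_key_points(logits):
    """Per-channel center of mass: [B, H, W, 4] logits -> [B, 4, 2] in (x, y)."""
    _, h, w, _ = logits.shape
    probs = spatial_softmax(logits)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack([xs, ys], axis=-1).astype(float)  # [H, W, 2]
    return np.einsum("bhwc,hwd->bcd", probs, grid)

# Sharp peaks at known locations recover those pixel coordinates per channel.
logits = np.full((1, 8, 8, 4), -20.0)
peaks = [(1, 2), (3, 4), (5, 6), (7, 0)]  # (y, x) peak per channel
for ch, (y, x) in enumerate(peaks):
    logits[0, y, x, ch] = 20.0
points = extract_key_points(logits)  # ≈ [[2,1], [4,3], [6,5], [0,7]] in (x, y)
```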

overlay_labels_on_image(image, label, alpha=0.5)

Overlay predicted heatmaps and connecting lines on the input image.

Parameters:
  • image (Tensor) – Input image of shape [H, W] or [H, W, C]

  • label (Tensor) – Predicted logits of shape [H, W, 4]

  • alpha (float) – Blending factor for overlay (0=transparent, 1=opaque)

Returns:

Image with overlaid heatmaps and measurements of shape [H, W, 3]

Return type:

ndarray

visualize_logits(images, logits)

Create visualization of model predictions overlaid on input images.

Parameters:
  • images (Tensor) – Input images of shape [B, H, W, C]

  • logits (Tensor) – Model predictions of shape [B, H, W, 4]

Returns:

Images with overlaid predictions of shape [B, H, W, 3]

Return type:

Tensor