zea.ops

Operations and Pipelines for ultrasound data processing.

This module contains two important classes, Operation and Pipeline, which are used to process ultrasound data. A pipeline is a sequence of operations that are applied to the data in a specific order.

We implement a range of common operations for ultrasound data processing (zea.ops.ultrasound), but also support a variety of basic tensor operations (zea.ops.tensor). Lastly, all existing Keras operations (see Keras Ops API) are available as zea operations as well (see zea.ops.keras_ops).

Stand-alone manual usage

Operations can be run on their own:

Examples

>>> import numpy as np
>>> from zea.ops import EnvelopeDetect
>>> data = np.random.randn(2000, 128, 1)
>>> # static arguments are passed in the constructor
>>> envelope_detect = EnvelopeDetect(axis=-1)
>>> # other parameters can be passed here along with the data
>>> envelope_data = envelope_detect(data=data)

Using a pipeline

You can initialize with a default pipeline or create your own custom pipeline.

>>> from zea.ops import Pipeline, EnvelopeDetect, Normalize, LogCompress
>>> pipeline = Pipeline.from_default()

>>> operations = [
...     EnvelopeDetect(),
...     Normalize(),
...     LogCompress(),
... ]
>>> pipeline_custom = Pipeline(operations)

One can also load a pipeline from a Config object or a YAML/JSON file:

>>> from zea import Pipeline

>>> # From JSON string
>>> json_string = '{"pipeline": {"operations": ["identity"]}}'
>>> pipeline = Pipeline.from_json(json_string)

>>> # From YAML file
>>> import yaml
>>> from zea import Config
>>> # Create a sample pipeline YAML file
>>> pipeline_dict = {
...     "pipeline": {
...         "operations": [
...             {"name": "identity"},
...         ],
...     }
... }
>>> with open("pipeline.yaml", "w") as f:
...     yaml.dump(pipeline_dict, f)
>>> yaml_file = "pipeline.yaml"
>>> pipeline = Pipeline.from_path(yaml_file)

Example of a yaml file:

pipeline:
  operations:
    - name: demodulate
    - name: beamform
      params:
        type: das
        pfield: false
        num_patches: 100
    - name: envelope_detect
    - name: normalize
    - name: log_compress

Functions

get_ops(ops_name)

Get the operation from the registry.

Classes

Identity([input_data_type, ...])

Identity operation.

Lambda(*args, **kwargs)

Use any function as an operation.

Mean(*args, **kwargs)

Take the mean of the input data along a specific axis.

Operation([input_data_type, ...])

A base abstract class for operations in the pipeline with caching functionality.

DelayAndSum(*args, **kwargs)

Sums time-delayed signals along channels and transmits.

DelayMultiplyAndSum(*args, **kwargs)

Performs the operations for the Delay-Multiply-and-Sum beamformer except the delay.

Beamform([beamformer, num_patches, ...])

Classical beamforming pipeline for ultrasound image formation.

Map(operations, argnames[, in_axes, ...])

A pipeline that maps its operations over specified input arguments.

PatchedGrid(*args[, num_patches])

A pipeline that maps its operations over flatgrid and flat_pfield keys.

Pipeline(operations[, with_batch_dim, ...])

Pipeline class for processing ultrasound data through a series of operations.

GaussianBlur(sigma[, order, mode, cval, ...])

GaussianBlur is an operation that applies a Gaussian blur to an input image.

Normalize(*args, **kwargs)

Normalize data to a given range.

Pad(target_shape[, uniform, axis, ...])

Pad layer for padding tensors to a specified shape.

Threshold(*args, **kwargs)

Threshold an array, setting values below/above a threshold to a fill value.

AnisotropicDiffusion([input_data_type, ...])

Speckle Reducing Anisotropic Diffusion (SRAD) filter.

ApplyWindow(*args, **kwargs)

Apply a window function to the input data along a specific axis.

BandPassFilter([axis, num_taps, filter_key])

Apply a band-pass FIR filter to the real input signal using convolution.

ChannelsToComplex([input_data_type, ...])

Convert data stored as two real-valued channels (I/Q) into a complex-valued tensor.

Companding(*args, **kwargs)

Companding according to the A- or μ-law algorithm.

ComplexToChannels(*args, **kwargs)

Convert a complex-valued tensor into two real-valued channels (I/Q).

Demodulate(*args, **kwargs)

Demodulates the input data to baseband.

Downsample([factor, phase, axis])

Downsample data along a specific axis.

EnvelopeDetect(*args, **kwargs)

Envelope detection of RF signals.

FirFilter(axis[, complex_channels, filter_key])

Apply a FIR filter to the input signal using convolution.

LeeFilter(sigma[, mode, cval, truncate, axes])

The Lee filter is a speckle reduction filter commonly used in synthetic aperture radar (SAR) and ultrasound image processing.

LogCompress([clip])

Logarithmic compression of data.

LowPassFilterIQ([axis, num_taps, filter_key])

Apply a low-pass FIR filter to the demodulated IQ (n_ch=2) input signal using convolution.

PfieldWeighting(*args, **kwargs)

Weighting aligned data with the pressure field.

ReshapeGrid(*args, **kwargs)

Reshape flat grid data to grid shape.

ScanConvert(*args, **kwargs)

Scan convert images to cartesian coordinates.

Simulate(*args, **kwargs)

Simulate RF data.

TOFCorrection(*args, **kwargs)

Time-of-flight correction operation for ultrasound data.

UpMix(*args, **kwargs)

Upmix IQ data to RF data.

CommonMidpointPhaseError([input_data_type, ...])

Calculates the Common Midpoint Phase Error (CMPE)

class zea.ops.AnisotropicDiffusion(input_data_type=None, output_data_type=None, key='data', output_key=None, cache_inputs=False, cache_outputs=False, jit_compile=True, with_batch_dim=True, jit_kwargs=None, jittable=True, additional_output_keys=None, **kwargs)[source]

Bases: Operation

Speckle Reducing Anisotropic Diffusion (SRAD) filter.

Reference:

  • https://www.researchgate.net/publication/5602035_Speckle_reducing_anisotropic_diffusion

  • https://nl.mathworks.com/matlabcentral/fileexchange/54044-image-despeckle-filtering-toolbox

Parameters:
  • input_data_type (Optional[DataTypes]) – The data type of the input data

  • output_data_type (Optional[DataTypes]) – The data type of the output data

  • key (Optional[str]) – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key (Optional[str]) – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs (Union[bool, List[str]]) – A list of input keys to cache or True to cache all inputs

  • cache_outputs (Union[bool, List[str]]) – A list of output keys to cache or True to cache all outputs

  • jit_compile (bool) – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim (bool) – Whether operations should expect a batch dimension in the input

  • jit_kwargs (dict | None) – Additional keyword arguments for the JIT compiler

  • jittable (bool) – Whether the operation can be JIT compiled

  • additional_output_keys (List[str]) – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(niter=100, lmbda=0.1, rect=None, eps=1e-06, **kwargs)[source]

Anisotropic diffusion filter.

Assumes input data is non-negative.

Parameters:
  • niter – Number of iterations.

  • lmbda – Lambda parameter.

  • rect – Rectangle [x1, y1, x2, y2] for homogeneous noise (optional).

  • eps – Small epsilon for stability.

Returns:

Filtered image (2D tensor or batch of images).

class zea.ops.ApplyWindow(*args, **kwargs)[source]

Bases: Operation

Apply a window function to the input data along a specific axis.

This operation can be used to zero out the end and/or beginning of the signal and apply a window of some size to transition from the zeroed region to the unmodified region.

The axis is divided into five regions: [start (zero)] - [size (window)] - [middle (unmodified)] - [size (window)] - [end (zero)]

Parameters:
  • axis (int) – Axis along which to apply the window.

  • size (int) – Size of the window to apply at the start and end regions.

  • start (int) – Number of elements to zero at the start.

  • end (int) – Number of elements to zero at the end.

  • window_type (str) – Type of window to apply. Supported types are “hanning” and “linear”.

STATIC_PARAMS = ['axis', 'size', 'window_type', 'start', 'end']
call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.
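The five-region layout above can be sketched in NumPy. This is an illustrative sketch of the windowing scheme only, not zea's implementation; region_window is a hypothetical helper name:

```python
import numpy as np

def region_window(n, start, end, size, window_type="hanning"):
    """Build the five-region window: [zeros][ramp up][ones][ramp down][zeros]."""
    if window_type == "hanning":
        ramp = np.hanning(2 * size)
        up, down = ramp[:size], ramp[size:]
    else:  # "linear"
        up = np.linspace(0.0, 1.0, size, endpoint=False)
        down = up[::-1]
    middle = n - start - end - 2 * size
    return np.concatenate(
        [np.zeros(start), up, np.ones(middle), down, np.zeros(end)]
    )

# 100-sample axis: 10 zeroed, 15-sample taper, 50 unmodified, 15-sample taper, 10 zeroed
w = region_window(100, start=10, end=10, size=15)
```

Multiplying the signal by such a window along the chosen axis zeroes the edges while leaving the middle untouched.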

class zea.ops.BandPassFilter(axis=-3, num_taps=127, filter_key='band_pass_filter', **kwargs)[source]

Bases: FirFilter

Apply a band-pass FIR filter to the real input signal using convolution.

The bandwidth parameter in the call method defines the passband centered around demodulation_frequency, with edges at demodulation_frequency - bandwidth/2 and demodulation_frequency + bandwidth/2. Because the passband is centered at the carrier, make sure this operation is applied before demodulation to baseband.

This operation is provided for convenience and will recompute the filter weights every time it is called. Alternatively, you can use FirFilter with pre-computed filter taps.

Initialize the BandPassFilter operation.

Parameters:
  • axis (int) – Axis along which to apply the filter. Cannot be the batch dimension. Default is -3, which is the n_ax axis for standard ultrasound data layout.

  • num_taps (int) – Number of taps in the FIR filter. Default is 127. An odd number of taps results in a type I filter, an even number in a type II filter.

call(sampling_frequency, demodulation_frequency, bandwidth, **kwargs)[source]

Apply band-pass filter with specified bandwidth.

Parameters:
  • sampling_frequency (float) – Sampling frequency in Hz.

  • demodulation_frequency (float) – Center frequency in Hz.

  • bandwidth (float) – Bandwidth in Hz. The filter will pass frequencies from demodulation_frequency - bandwidth/2 to demodulation_frequency + bandwidth/2.

Returns:

Dictionary containing filtered signal.

Return type:

dict
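An equivalent band-pass FIR design can be sketched offline with SciPy. This is a sketch of the kind of filter described above; the exact design window zea uses is an assumption:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs, f0, bw = 40e6, 5e6, 4e6  # sampling frequency, center frequency, bandwidth (Hz)

# 127-tap band-pass FIR with edges at f0 - bw/2 and f0 + bw/2
taps = firwin(127, [f0 - bw / 2, f0 + bw / 2], pass_zero=False, fs=fs)

# Inspect the frequency response: ~unity gain at the center, strong rejection far away
freqs, response = freqz(taps, worN=4096, fs=fs)
gain_center = np.abs(response)[np.argmin(np.abs(freqs - f0))]
gain_stop = np.abs(response)[np.argmin(np.abs(freqs - 15e6))]
```

Pre-computing taps like these and handing them to FirFilter avoids recomputing the design on every call.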

class zea.ops.Beamform(beamformer='delay_and_sum', num_patches=100, enable_pfield=False, **kwargs)[source]

Bases: Pipeline

Classical beamforming pipeline for ultrasound image formation.

Expected input data type is DataTypes.RF_DATA which has shape (n_tx, n_ax, n_el, n_ch).

Will run the following operations in sequence:

  • TOFCorrection (output type DataTypes.ALIGNED_DATA: (n_tx, n_ax, n_el, n_ch))

  • PfieldWeighting (optional, output type DataTypes.ALIGNED_DATA: (n_tx, n_ax, n_el, n_ch))

  • Sum over channels (DAS)

  • Sum over transmits (Compounding) (output type DataTypes.BEAMFORMED_DATA: (grid_size_z, grid_size_x, n_ch))

  • ReshapeGrid (the flattened grid is also reshaped to (grid_size_z, grid_size_x))

Initialize a Delay-and-Sum beamforming zea.Pipeline.

Parameters:
  • beamformer (str) – Type of beamformer to use. Currently supports “delay_and_sum” and “delay_multiply_and_sum”.

  • num_patches (int) – Number of patches to split the grid into for patch-wise beamforming. If 1, no patching is performed.

  • enable_pfield (bool) – Whether to include pressure field weighting in the beamforming.

get_dict(compact=True)[source]

Convert the pipeline to a dictionary.

Unlike Pipeline.get_dict(), this does NOT include the internal operations list, since Beamform auto-generates its operations from beamformer, num_patches, and enable_pfield.

Return type:

dict

class zea.ops.ChannelsToComplex(input_data_type=None, output_data_type=None, key='data', output_key=None, cache_inputs=False, cache_outputs=False, jit_compile=True, with_batch_dim=True, jit_kwargs=None, jittable=True, additional_output_keys=None, **kwargs)[source]

Bases: Operation

Convert data stored as two real-valued channels (I/Q) into a complex-valued tensor.

Parameters:
  • input_data_type (Optional[DataTypes]) – The data type of the input data

  • output_data_type (Optional[DataTypes]) – The data type of the output data

  • key (Optional[str]) – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key (Optional[str]) – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs (Union[bool, List[str]]) – A list of input keys to cache or True to cache all inputs

  • cache_outputs (Union[bool, List[str]]) – A list of output keys to cache or True to cache all outputs

  • jit_compile (bool) – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim (bool) – Whether operations should expect a batch dimension in the input

  • jit_kwargs (dict | None) – Additional keyword arguments for the JIT compiler

  • jittable (bool) – Whether the operation can be JIT compiled

  • additional_output_keys (List[str]) – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

class zea.ops.CommonMidpointPhaseError(input_data_type=None, output_data_type=None, key='data', output_key=None, cache_inputs=False, cache_outputs=False, jit_compile=True, with_batch_dim=True, jit_kwargs=None, jittable=True, additional_output_keys=None, **kwargs)[source]

Bases: Operation

Calculates the Common Midpoint Phase Error (CMPE)

Computes CMPE between translated transmit and receive apertures with a common midpoint.

Important

Only works for multistatic datasets, e.g. synthetic aperture data.

Note

This was directly adapted from the Differentiable Beamforming for Ultrasound Autofocusing (DBUA) paper, see original paper and code.

Parameters:
  • input_data_type (Optional[DataTypes]) – The data type of the input data

  • output_data_type (Optional[DataTypes]) – The data type of the output data

  • key (Optional[str]) – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key (Optional[str]) – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs (Union[bool, List[str]]) – A list of input keys to cache or True to cache all inputs

  • cache_outputs (Union[bool, List[str]]) – A list of output keys to cache or True to cache all outputs

  • jit_compile (bool) – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim (bool) – Whether operations should expect a batch dimension in the input

  • jit_kwargs (dict | None) – Additional keyword arguments for the JIT compiler

  • jittable (bool) – Whether the operation can be JIT compiled

  • additional_output_keys (List[str]) – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

create_subapertures(data, halfsa, dx)[source]

Create subapertures from the data.

Parameters:
  • data (ops.Tensor) – The data to create subapertures from.

  • halfsa (int) – Half of the subaperture.

  • dx (float) – The spacing between the subapertures.

Returns:

transmit_subap (ops.Tensor): The transmit subapertures. receive_subap (ops.Tensor): The receive subapertures.

Return type:

Tuple[ops.Tensor, ops.Tensor]

process_phase_map(data, **kwargs)[source]

Create the common midpoint subaperture phase error map.

Parameters:

data (ops.Tensor) – The data to create the phase error map from.

Returns:

The phase error map.

Return type:

phase_error_map (ops.Tensor)

class zea.ops.Companding(*args, **kwargs)[source]

Bases: Operation

Companding according to the A- or μ-law algorithm.

Invertible compressing operation. Used to compress dynamic range of input data (and subsequently expand).

μ-law companding: https://en.wikipedia.org/wiki/%CE%9C-law_algorithm

A-law companding: https://en.wikipedia.org/wiki/A-law_algorithm

Parameters:
  • expand (bool, optional) – If set to False (default), data is compressed, else expanded.

  • comp_type (str) – Either “a” or “mu”.

  • mu (float, optional) – compression parameter. Defaults to 255.

  • A (float, optional) – compression parameter. Defaults to 87.6.

  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(mu=255, A=87.6, **kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.
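The μ-law branch can be written out directly. A NumPy sketch of the standard μ-law formulas for inputs normalized to [-1, 1], not the zea implementation:

```python
import numpy as np

def mu_compress(x, mu=255.0):
    # F(x) = sign(x) * ln(1 + mu*|x|) / ln(1 + mu)
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=255.0):
    # F^-1(y) = sign(y) * ((1 + mu)^|y| - 1) / mu
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

x = np.linspace(-1.0, 1.0, 201)
roundtrip = mu_expand(mu_compress(x))  # compression is exactly invertible
```

The round trip recovers the input, which is what makes companding useful for temporarily compressing dynamic range.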

class zea.ops.ComplexToChannels(*args, **kwargs)[source]

Bases: Operation


Convert a complex-valued tensor into two real-valued channels (I/Q).
Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

class zea.ops.DelayAndSum(*args, **kwargs)[source]

Bases: Operation

Sums time-delayed signals along channels and transmits.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Performs DAS beamforming on tof-corrected input.

Parameters:

tof_corrected_data (ops.Tensor) – The TOF corrected input of shape (n_tx, prod(grid.shape), n_el, n_ch) with optional batch dimension.

Returns:

Dictionary containing beamformed_data of shape (prod(grid.shape), n_ch) with optional batch dimension.

Return type:

dict
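The shape contract of the summation can be illustrated with random data. This is illustrative only; real DAS output depends on the time-of-flight correction applied upstream:

```python
import numpy as np

n_tx, n_pix, n_el, n_ch = 4, 1000, 64, 1
tof_corrected = np.random.randn(n_tx, n_pix, n_el, n_ch)

# DAS: sum the aligned signals over elements (axis 2) and transmits (axis 0)
beamformed = tof_corrected.sum(axis=(0, 2))  # -> (n_pix, n_ch)
```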

class zea.ops.DelayMultiplyAndSum(*args, **kwargs)[source]

Bases: Operation

Performs the operations for the Delay-Multiply-and-Sum beamformer except the delay. The delay should be performed by the TOF correction operation.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Performs DMAS beamforming on tof-corrected input.

Parameters:

tof_corrected_data (ops.Tensor) – The TOF corrected input of shape (n_tx, prod(grid.shape), n_el, n_ch) with optional batch dimension.

Returns:

Dictionary containing beamformed_data of shape (grid_size_z*grid_size_x, n_ch) with optional batch dimension.

Return type:

dict

process_image(data)[source]

Performs DMAS beamforming on tof-corrected input.

Parameters:

data (ops.Tensor) – The TOF corrected input of shape (n_tx, n_pix, n_el, n_ch)

Returns:

The beamformed data of shape (n_pix, n_ch)

Return type:

ops.Tensor
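For a single pixel, the textbook DMAS combination sums the sign-preserving square-rooted products over all element pairs. A sketch of that standard formulation; whether zea matches it term-for-term is an assumption:

```python
import numpy as np

def dmas_pixel(x):
    """Sum of sign(x_i * x_j) * sqrt(|x_i * x_j|) over all pairs i < j."""
    s = np.sign(x) * np.sqrt(np.abs(x))           # signed square root
    # identity: (sum s)^2 = sum s^2 + 2 * sum_{i<j} s_i s_j
    return (s.sum() ** 2 - (s ** 2).sum()) / 2.0

# three identical unit signals -> 3 pairs, each contributing 1
y = dmas_pixel(np.array([1.0, 1.0, 1.0]))
```

The squared-sum identity avoids forming the O(n_el^2) pairwise products explicitly.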

class zea.ops.Demodulate(*args, **kwargs)[source]

Bases: Operation

Demodulates the input data to baseband. After this operation, the carrier frequency is removed (0 Hz) and the data is in IQ format stored in two real valued channels.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

ADD_OUTPUT_KEYS: List[str] = ['center_frequency', 'n_ch']
call(demodulation_frequency=None, sampling_frequency=None, **kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.
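The idea behind baseband demodulation can be sketched in NumPy: mix the RF signal down by the carrier and store the I and Q components as two real channels. An illustrative sketch, not the zea implementation (a low-pass filter would normally follow the mixing step):

```python
import numpy as np

fs, f0, n = 40e6, 5e6, 2048
t = np.arange(n) / fs
rf = np.cos(2 * np.pi * f0 * t)          # pure carrier as a toy RF signal

# Mix down: shift the carrier to 0 Hz
iq = rf * np.exp(-2j * np.pi * f0 * t)

# Store IQ as two real-valued channels, as described above
two_ch = np.stack([iq.real, iq.imag], axis=-1)
```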

class zea.ops.Downsample(factor=1, phase=0, axis=-3, **kwargs)[source]

Bases: Operation

Downsample data along a specific axis.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

ADD_OUTPUT_KEYS: List[str] = ['sampling_frequency', 'n_ax']
call(sampling_frequency=None, n_ax=None, **kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.
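Downsampling by an integer factor amounts to strided indexing along the chosen axis, starting at the phase offset. An illustrative sketch (note that naive decimation without a preceding low-pass filter can alias):

```python
import numpy as np

data = np.arange(24.0).reshape(2, 12, 1)   # toy data with the sample axis at -2
factor, phase, axis = 2, 0, -2

# Keep every `factor`-th sample starting from `phase`
idx = np.arange(phase, data.shape[axis], factor)
downsampled = np.take(data, idx, axis=axis)
```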

class zea.ops.EnvelopeDetect(*args, **kwargs)[source]

Bases: Operation

Envelope detection of RF signals.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]
Parameters:

data (ops.Tensor) – The beamformed data of shape (…, grid_size_z, grid_size_x, n_ch).

Returns:

The envelope detected data of shape (…, grid_size_z, grid_size_x).

Return type:

envelope_data (Tensor)
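Envelope detection is conventionally the magnitude of the analytic signal. A SciPy sketch of that idea on a synthetic RF pulse; whether zea uses a Hilbert transform internally is an assumption:

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 40e6, 5e6
t = np.arange(1024) / fs
pulse = np.exp(-((t - t[512]) ** 2) / (2 * (2e-7) ** 2))  # Gaussian envelope
rf = pulse * np.cos(2 * np.pi * f0 * t)                   # modulated RF pulse

# Envelope = magnitude of the analytic signal
envelope = np.abs(hilbert(rf))
```

The detected envelope tracks the Gaussian pulse shape while discarding the carrier oscillation.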

class zea.ops.FirFilter(axis, complex_channels=False, filter_key='fir_filter_taps', **kwargs)[source]

Bases: Operation

Apply a FIR filter to the input signal using convolution.

Looks for the filter taps in the input dictionary using the specified filter_key.

Parameters:
  • axis (int) – Axis along which to apply the filter. Cannot be the batch dimension and not the complex channel axis when complex_channels=True.

  • complex_channels (bool) – Whether the last dimension of the input signal represents complex channels (real and imaginary parts). When True, it will convert the signal to complex dtype before filtering and convert it back to two channels after filtering.

  • filter_key (str) – Key in the input dictionary where the FIR filter taps are stored. Default is “fir_filter_taps”.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

property valid_keys

Get the valid keys for the call method.

class zea.ops.GaussianBlur(sigma, order=0, mode='symmetric', cval=None, truncate=4.0, axes=(-3, -2), **kwargs)[source]

Bases: Filter

GaussianBlur is an operation that applies a Gaussian blur to an input image. Uses scipy.ndimage.gaussian_filter to create a kernel.

Parameters:
  • sigma (float) – Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • order (Union[int, Tuple[int]]) – The order of the filter along each axis is given as a sequence of integers, or as a single number. An order of 0 corresponds to convolution with a Gaussian kernel. A positive order corresponds to convolution with that derivative of a Gaussian. Default is 0.

  • mode (str) – Padding mode for the input image. Default is ‘symmetric’. See the Keras pad documentation (https://www.tensorflow.org/api_docs/python/tf/keras/ops/pad) for all options and the TensorFlow pad documentation (https://www.tensorflow.org/api_docs/python/tf/pad) for some examples. Note that the naming differs from scipy.ndimage.gaussian_filter!

  • cval (float | None) – Value to fill past edges of input if mode is ‘constant’. Default is None.

  • truncate (float) – Truncate the filter at this many standard deviations. Default is 4.0.

  • axes (Tuple[int]) – If None, input is filtered along all axes. Otherwise, input is filtered along the specified axes. When axes is specified, any tuples used for sigma, order, mode and/or radius must match the length of axes. The ith entry in any of these tuples corresponds to the ith entry in axes. Default is (-3, -2), which corresponds to the height and width dimensions of a (…, height, width, channels) tensor.

call(**kwargs)[source]

Apply a Gaussian filter to the input data.

Parameters:

data (ops.Tensor) – Input image data of shape (height, width, channels) with optional batch dimension if self.with_batch_dim.
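Since the kernel comes from scipy.ndimage.gaussian_filter, the reference behavior can be checked against SciPy directly. A sketch blurring a unit impulse:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.zeros((32, 32))
image[16, 16] = 1.0                      # unit impulse

# The blurred impulse is the (normalized) Gaussian kernel itself
blurred = gaussian_filter(image, sigma=2.0, truncate=4.0)
```

Because the kernel is normalized, the total intensity is preserved while the peak is spread out.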

class zea.ops.Identity(input_data_type=None, output_data_type=None, key='data', output_key=None, cache_inputs=False, cache_outputs=False, jit_compile=True, with_batch_dim=True, jit_kwargs=None, jittable=True, additional_output_keys=None, **kwargs)[source]

Bases: Operation

Identity operation.

Parameters:
  • input_data_type (Optional[DataTypes]) – The data type of the input data

  • output_data_type (Optional[DataTypes]) – The data type of the output data

  • key (Optional[str]) – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key (Optional[str]) – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs (Union[bool, List[str]]) – A list of input keys to cache or True to cache all inputs

  • cache_outputs (Union[bool, List[str]]) – A list of output keys to cache or True to cache all outputs


  • jit_compile (bool) – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim (bool) – Whether operations should expect a batch dimension in the input

  • jit_kwargs (dict | None) – Additional keyword arguments for the JIT compiler

  • jittable (bool) – Whether the operation can be JIT compiled

  • additional_output_keys (List[str]) – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Returns the input as is.

Return type:

Dict

class zea.ops.Lambda(*args, **kwargs)[source]

Bases: Operation

Use any function as an operation.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

get_dict(compact=True)[source]

Serialize lambda-based operations.

Generic zea.ops.Lambda instances are intentionally rejected because arbitrary callables cannot be reliably serialized. Registered subclasses (e.g. zea.ops.keras_ops wrappers) are serialized by operation name and the callable keyword arguments.

class zea.ops.LeeFilter(sigma, mode='symmetric', cval=None, truncate=4.0, axes=(-3, -2), **kwargs)[source]

Bases: Filter

The Lee filter is a speckle reduction filter commonly used in synthetic aperture radar (SAR) and ultrasound image processing. It smooths the image while preserving edges and details. This implementation uses Gaussian filter for local statistics and treats channels independently.

Lee, J.S. (1980). Digital image enhancement and noise filtering by use of local statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, (2), 165-168.

Parameters:
  • sigma (float) – Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • mode (str) – Padding mode for the input image. Default is ‘symmetric’. See [keras docs](https://www.tensorflow.org/api_docs/python/tf/keras/ops/pad) for all options and [tensorflow docs](https://www.tensorflow.org/api_docs/python/tf/pad) for some examples. Note that the naming differs from scipy.ndimage.gaussian_filter!

  • cval (float | None) – Value to fill past edges of input if mode is ‘constant’. Default is None.

  • truncate (float) – Truncate the filter at this many standard deviations. Default is 4.0.

  • axes (Tuple[int]) – If None, input is filtered along all axes. Otherwise, input is filtered along the specified axes. When axes is specified, any tuples used for sigma, order, mode and/or radius must match the length of axes. The ith entry in any of these tuples corresponds to the ith entry in axes. Default is (-3, -2), which corresponds to the height and width dimensions of a (…, height, width, channels) tensor.

call(**kwargs)[source]

Apply the Lee filter to the input data.

Parameters:

data (ops.Tensor) – Input image data of shape (height, width, channels) with optional batch dimension if self.with_batch_dim.
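The classic Lee estimator described above can be sketched in a few lines of numpy/scipy (a minimal version using Gaussian local statistics and a crude global noise estimate; the exact estimator in zea may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lee_filter(img, sigma=2.0, eps=1e-8):
    """Minimal Lee filter sketch: out = mean + k * (img - mean),
    with adaptive gain k = var / (var + noise_var)."""
    mean = gaussian_filter(img, sigma)
    sq_mean = gaussian_filter(img**2, sigma)
    var = np.maximum(sq_mean - mean**2, 0.0)
    noise_var = var.mean()  # crude global noise estimate (an assumption)
    k = var / (var + noise_var + eps)
    return mean + k * (img - mean)

speckled = np.random.default_rng(0).rayleigh(size=(64, 64))
filtered = lee_filter(speckled)
assert filtered.shape == speckled.shape
```

In homogeneous regions the local variance is close to the noise variance, so k is small and the filter smooths; near edges the variance is large, k approaches 1, and the original pixel is preserved.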

class zea.ops.LogCompress(clip=True, **kwargs)[source]

Bases: Operation

Logarithmic compression of data.

Initialize the LogCompress operation.

Parameters:

clip (bool) – Whether to clip the output to a dynamic range. Defaults to True.

call(dynamic_range=None, **kwargs)[source]

Apply logarithmic compression to data.

Parameters:

dynamic_range (tuple, optional) – Dynamic range in dB. Defaults to (-60, 0).

Returns:

Dictionary containing log-compressed data

Return type:

dict
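What log compression with dynamic-range clipping typically computes can be sketched in numpy, assuming the conventional 20·log10 definition relative to the envelope maximum (the zea implementation may normalize differently):

```python
import numpy as np

def log_compress(envelope, dynamic_range=(-60, 0), clip=True, eps=1e-12):
    """Conventional log compression sketch: convert an envelope to dB
    relative to its maximum, optionally clipping to the dynamic range."""
    db = 20.0 * np.log10(envelope / (envelope.max() + eps) + eps)
    if clip:
        db = np.clip(db, *dynamic_range)
    return db

env = np.abs(np.random.default_rng(0).standard_normal((128, 128)))
img = log_compress(env)
assert img.max() <= 0.0 and img.min() >= -60.0
```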

class zea.ops.LowPassFilterIQ(axis=-3, num_taps=127, filter_key='low_pass_filter', **kwargs)[source]

Bases: FirFilter

Apply a low-pass FIR filter to the demodulated IQ (n_ch=2) input signal using convolution.

It is recommended to use FirFilter with pre-computed filter taps for jittable operations. The LowPassFilterIQ operation itself is not jittable and is provided for convenience only.

Uses get_low_pass_iq_filter() to compute the filter taps.

Initialize the LowPassFilterIQ operation.

Parameters:
  • axis (int) – Axis along which to apply the filter. Cannot be the batch dimension and cannot be the complex channel axis (the last axis). Default is -3, which is the n_ax axis for standard ultrasound data layout.

  • num_taps (int) – Number of taps in the FIR filter. Default is 127. An odd number of taps results in a type I filter, an even number in a type II filter.

call(bandwidth, sampling_frequency, center_frequency, **kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.
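Low-pass taps for demodulated IQ data are presumably computed along these lines (a sketch using scipy.signal.firwin; the cutoff choice here is an assumption, not necessarily the recipe used by get_low_pass_iq_filter()):

```python
import numpy as np
from scipy.signal import firwin

# Sketch of designing low-pass FIR taps for demodulated IQ data.
# Assumption: cutoff at half the signal bandwidth.
sampling_frequency = 20e6  # Hz
bandwidth = 6e6            # Hz
num_taps = 127             # odd -> type I linear-phase FIR

taps = firwin(num_taps, cutoff=bandwidth / 2, fs=sampling_frequency)
assert taps.shape == (num_taps,)
assert np.isclose(taps.sum(), 1.0)  # unit DC gain (scale=True by default)
```

Pre-computing taps like this and handing them to FirFilter keeps the filtering step jittable, as recommended above.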

class zea.ops.Map(operations, argnames, in_axes=0, out_axes=0, chunks=None, batch_size=None, **kwargs)[source]

Bases: Pipeline

A pipeline that maps its operations over specified input arguments.

This can be used to reduce memory usage by processing data in chunks.

Notes

  • When chunks and batch_size are both None (default), this behaves like a normal Pipeline.

  • Changing anything other than self.output_key in the dict will not be propagated.

  • Will be jitted as a single operation, not the individual operations.

  • This class handles the batching.

For more information on how to use in_axes, out_axes, see the documentation for jax.vmap.

Example

>>> from zea.ops import Map, Pipeline, Demodulate, TOFCorrection

>>> # apply operations in batches of 8
>>> # in this case, over the first axis of "data"
>>> # or more specifically, process 8 transmits at a time

>>> pipeline_mapped = Map(
...     [
...         Demodulate(),
...         TOFCorrection(),
...     ],
...     argnames="data",
...     batch_size=8,
... )

>>> # you can also map a subset of the operations
>>> # for example, demodulate in 4 chunks
>>> # or more specifically, split the transmit axis into 4 parts

>>> pipeline_mapped = Pipeline(
...     [
...         Map([Demodulate()], argnames="data", chunks=4),
...         TOFCorrection(),
...     ],
... )
Parameters:
  • operations (List[Operation]) – List of operations to be performed.

  • argnames (Union[List[str], str]) – List of argument names (or keys) to map over. Can also be a single string if only one argument is mapped over.

  • in_axes (Union[List[Optional[int]], int]) – Axes to map over for each argument. If a single int is provided, it is used for all arguments.

  • out_axes (Union[List[Optional[int]], int]) – Axes to map over for each output. If a single int is provided, it is used for all outputs.

  • chunks (int | None) – Number of chunks to split the input data into. If None, no chunking is performed. Mutually exclusive with batch_size.

  • batch_size (int | None) – Size of batches to process at once. If None, no batching is performed. Mutually exclusive with chunks.

call(**inputs)[source]

Process input data through the pipeline.

get_dict(compact=True)[source]

Get the configuration of the pipeline.

jit()[source]

JIT compile the pipeline.

property jit_options

Get the jit_options property of the pipeline.

jittable_call(**inputs)[source]

Process input data through the pipeline.

unjit()[source]

Un-JIT compile the pipeline.

property with_batch_dim

Get the with_batch_dim property of the pipeline.

class zea.ops.Mean(*args, **kwargs)[source]

Bases: Operation

Take the mean of the input data along a specific axis.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

class zea.ops.Normalize(*args, **kwargs)[source]

Bases: Operation

Normalize data to a given range.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

ADD_OUTPUT_KEYS: List[str] = ['minval', 'maxval']
call(**kwargs)[source]

Normalize data to a given range.

Parameters:
  • output_range (tuple, optional) – Range to which data should be mapped. Defaults to (0, 1).

  • input_range (tuple, optional) – Range of input data. If None, the range of the input data will be computed. Defaults to None.

Returns:

Dictionary containing normalized data, along with the computed or provided input range (minval and maxval).

Return type:

dict
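The mapping described above is a linear rescaling from input_range to output_range; a numpy sketch (zea additionally returns the minval/maxval it used, mirrored here):

```python
import numpy as np

def normalize(data, output_range=(0, 1), input_range=None):
    """Sketch of range normalization: map input_range linearly onto
    output_range, computing the input range from the data if not given."""
    if input_range is None:
        input_range = (data.min(), data.max())
    minval, maxval = input_range
    lo, hi = output_range
    scaled = (data - minval) / (maxval - minval)
    return scaled * (hi - lo) + lo, minval, maxval

data = np.array([-60.0, -30.0, 0.0])
out, minval, maxval = normalize(data)
assert out.min() == 0.0 and out.max() == 1.0
```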

static to_float32(data)[source]

Converts an iterable to float32 and leaves None values as is.

property valid_keys

Get the valid keys for the call method.

class zea.ops.Operation(input_data_type=None, output_data_type=None, key='data', output_key=None, cache_inputs=False, cache_outputs=False, jit_compile=True, with_batch_dim=True, jit_kwargs=None, jittable=True, additional_output_keys=None, **kwargs)[source]

Bases: Operation

A base abstract class for operations in the pipeline with caching functionality.

Parameters:
  • input_data_type (Optional[DataTypes]) – The data type of the input data

  • output_data_type (Optional[DataTypes]) – The data type of the output data

  • key (Optional[str]) – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key (Optional[str]) – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs (Union[bool, List[str]]) – A list of input keys to cache or True to cache all inputs

  • cache_outputs (Union[bool, List[str]]) – A list of output keys to cache or True to cache all outputs

  • jit_compile (bool) – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim (bool) – Whether operations should expect a batch dimension in the input

  • jit_kwargs (dict | None) – Additional keyword arguments for the JIT compiler

  • jittable (bool) – Whether the operation can be JIT compiled

  • additional_output_keys (List[str]) – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

ADD_OUTPUT_KEYS: List[str] = []
__call__(*args, **kwargs)[source]

Process the input keyword arguments and return the processed results.

Parameters:

kwargs – Keyword arguments to be processed.

Return type:

Dict

Returns:

Combined input and output as kwargs.
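The calling contract above (the operation's output merged back into the input kwargs) can be sketched with a toy operation; this is purely illustrative and not the actual zea base-class code:

```python
# Toy illustration of the Operation calling contract: the operation reads
# its input key from kwargs and its result dict is merged back into them,
# so downstream operations see both the original and newly produced keys.
class ToyOperation:
    def __init__(self, key="data", output_key=None):
        self.key = key
        self.output_key = output_key or key

    def call(self, **kwargs):
        # Hypothetical processing: double the value under self.key.
        return {self.output_key: kwargs[self.key] * 2}

    def __call__(self, **kwargs):
        return {**kwargs, **self.call(**kwargs)}

out = ToyOperation(output_key="doubled")(data=3, sampling_frequency=20e6)
assert out == {"data": 3, "sampling_frequency": 20e6, "doubled": 6}
```

This merging is why unrelated parameters (like sampling_frequency above) pass through a pipeline untouched until an operation needs them.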

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

clear_cache()[source]

Clear the input and output caches.

get_dict(compact=True)[source]

Get the configuration of the operation.

Parameters:

compact (bool) – If True (default), only include parameters that differ from their defaults. If False, include all parameters for full reproducibility.

property jittable

Check if the operation can be JIT compiled.

property needs_keys: set

Get a set of all input keys needed by the operation.

property output_keys: List[str]

Get the output keys of the operation.

set_input_cache(input_cache)[source]

Set a cache for inputs, then retrace the function if necessary.

Parameters:

input_cache (Dict[str, Any]) – A dictionary containing cached inputs.

set_jit(jit_compile)[source]

Set the JIT compilation flag and set the _call method accordingly.

set_output_cache(output_cache)[source]

Set a cache for outputs, then retrace the function if necessary.

Parameters:

output_cache (Dict[str, Any]) – A dictionary containing cached outputs.

property static_params

Get the static parameters of the operation.

property valid_keys: set

Get the valid keys for the call method.

class zea.ops.Pad(target_shape, uniform=True, axis=None, fail_on_bigger_shape=True, pad_kwargs=None, **kwargs)[source]

Bases: Operation, DataLayer

Pad layer for padding tensors to a specified shape.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(**kwargs)[source]

Abstract method that defines the processing logic for the operation. Subclasses must implement this method.

pad(z, target_shape, uniform=True, axis=None, fail_on_bigger_shape=True, **kwargs)[source]

Pads the input tensor z to the specified shape.

Parameters:
  • z (tensor) – The input tensor to be padded.

  • target_shape (list | tuple) – The target shape to pad the tensor to.

  • uniform (bool) – If True, ensures that padding is uniform (even on both sides). Default is True.

  • axis (Union[int, List[int]]) – The axis or axes along which target_shape was specified. If None, len(target_shape) == len(ops.shape(z)) must hold. Default is None.

  • fail_on_bigger_shape (bool) – If True (default), raises an error if any target dimension is smaller than the input shape; if False, pads only where the target shape exceeds the input shape and leaves other dimensions unchanged.

  • kwargs – Additional keyword arguments to pass to the padding function.

Returns:

The padded tensor with the specified shape.

Return type:

tensor
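A numpy sketch of the padding computation (where the split is uneven, this sketch puts the extra sample at the end; how zea handles that case is an assumption here):

```python
import numpy as np

def pad_to(z, target_shape, uniform=True):
    """Sketch of padding a tensor up to target_shape (all axes given).
    With uniform=True the padding is split evenly over both sides."""
    pad_width = []
    for dim, target in zip(z.shape, target_shape):
        total = target - dim
        assert total >= 0, "target dimension smaller than input"
        pad_width.append((total // 2, total - total // 2) if uniform else (0, total))
    return np.pad(z, pad_width)

z = np.ones((3, 5))
padded = pad_to(z, (5, 8))
assert padded.shape == (5, 8)
```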

class zea.ops.PatchedGrid(*args, num_patches=10, **kwargs)[source]

Bases: Map

A pipeline that maps its operations over flatgrid and flat_pfield keys.

This can be used to reduce memory usage by processing data in chunks.

For more information and flexibility, see zea.ops.Map.

Parameters:
  • operations (list) – List of operations to be performed.

  • argnames (str or list) – List of argument names (or keys) to map over. Can also be a single string if only one argument is mapped over.

  • in_axes (int or list) – Axes to map over for each argument. If a single int is provided, it is used for all arguments.

  • out_axes (int or list) – Axes to map over for each output. If a single int is provided, it is used for all outputs.

  • chunks (int, optional) – Number of chunks to split the input data into. If None, no chunking is performed. Mutually exclusive with batch_size.

  • batch_size (int, optional) – Size of batches to process at once. If None, no batching is performed. Mutually exclusive with chunks.

get_dict(compact=True)[source]

Get the configuration of the pipeline.

class zea.ops.PfieldWeighting(*args, **kwargs)[source]

Bases: Operation

Weighting aligned data with the pressure field.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(flat_pfield=None, **kwargs)[source]

Weight data with pressure field.

Parameters:

flat_pfield (ops.Tensor) – Pressure field weight mask of shape (n_pix, n_tx)

Returns:

Dictionary containing weighted data

Return type:

dict
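The weighting itself is an elementwise multiply of the aligned data with the pressure-field mask, broadcast over the remaining axes; a numpy sketch (the exact axis layout of the data is an assumption here):

```python
import numpy as np

# Sketch of pressure-field weighting: scale each (pixel, transmit) sample
# of the time-of-flight-corrected data by the pressure-field mask.
n_pix, n_tx, n_el = 1024, 11, 128
data = np.random.default_rng(0).standard_normal((n_pix, n_tx, n_el))
flat_pfield = np.random.default_rng(1).uniform(size=(n_pix, n_tx))

weighted = data * flat_pfield[:, :, None]  # broadcast over elements
assert weighted.shape == data.shape
```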

class zea.ops.Pipeline(operations, with_batch_dim=True, jit_options='ops', jit_kwargs=None, name='pipeline', validate=True, timed=False)[source]

Bases: object

Pipeline class for processing ultrasound data through a series of operations.

Initialize a pipeline.

Parameters:
  • operations (List[Operation]) – A list of Operation instances representing the operations to be performed.

  • with_batch_dim (bool) – Whether operations should expect a batch dimension. Defaults to True.

  • jit_options (Optional[str]) –

    The JIT options to use. Must be “pipeline”, “ops”, or None.

    • ”pipeline”: compiles the entire pipeline as a single function. This may be faster but does not preserve python control flow, such as caching.

    • ”ops”: compiles each operation separately. This preserves python control flow and caching functionality, while still speeding up the individual operations.

    • None: disables JIT compilation.

    Defaults to “ops”.

  • jit_kwargs (dict | None) – Additional keyword arguments for the JIT compiler.

  • name (str, optional) – The name of the pipeline. Defaults to “pipeline”.

  • validate (bool, optional) – Whether to validate the pipeline. Defaults to True.

  • timed (bool) – Whether to time each operation. Defaults to False.

__call__(return_numpy=False, **inputs)[source]

Process input data through the pipeline.

Return type:

Dict[str, Any]

append(operation)[source]

Append an operation to the pipeline.

call(**inputs)[source]

Process input data through the pipeline.

Return type:

Dict[str, Any]

copy()[source]

Create a copy of the pipeline.

Return type:

Pipeline

classmethod from_config(config, **kwargs)[source]

Create a pipeline from a dictionary or zea.Config object.

Parameters:
  • config (Dict) – Configuration dictionary or zea.Config object. Must have a pipeline key with a subkey operations.

  • **kwargs – Additional keyword arguments to be passed to the pipeline.

Return type:

Pipeline

Example

>>> from zea import Config, Pipeline
>>> config = Config(
...     {
...         "pipeline": {
...             "operations": [
...                 "identity",
...             ],
...         }
...     }
... )
>>> pipeline = Pipeline.from_config(config)
classmethod from_default(beamformer='delay_and_sum', num_patches=100, baseband=False, enable_pfield=False, timed=False, **kwargs)[source]

Create a default pipeline.

Parameters:
  • beamformer (str) – Type of beamformer to use. Currently supported: “delay_and_sum” and “delay_multiply_and_sum”. Defaults to “delay_and_sum”.

  • num_patches (int) – Number of patches for the PatchedGrid operation. Defaults to 100. If you get an out of memory error, try to increase this number.

  • baseband (bool) – If True, assume the input data is baseband (I/Q) data, which has 2 channels (last dim). Defaults to False, which assumes RF data, so input signal has a single channel dim and is still on carrier frequency.

  • enable_pfield (bool) – If True, apply PfieldWeighting. Defaults to False. This will calculate pressure field and only beamform the data to those locations.

  • timed (bool, optional) – Whether to time each operation. Defaults to False.

  • **kwargs – Additional keyword arguments to be passed to the Pipeline constructor.

Return type:

Pipeline

classmethod from_json(json_string, **kwargs)[source]

Create a pipeline from a JSON string.

Parameters:
  • json_string (str) – JSON string representing the pipeline. Must have a pipeline key with a subkey operations.

  • **kwargs – Additional keyword arguments to be passed to the pipeline.

Return type:

Pipeline

Example

>>> from zea import Pipeline
>>> json_string = '{"pipeline": {"operations": ["identity"]}}'
>>> pipeline = Pipeline.from_json(json_string)

classmethod from_path(file_path, **kwargs)[source]

Create a pipeline from a YAML/config file path.

Parameters:
  • file_path (str) – Path to the config file (local or hf:// URI). Must have a pipeline key with a subkey operations.

  • **kwargs – Additional keyword arguments to be passed to the pipeline.

Return type:

Pipeline

Example

>>> from zea import Config, Pipeline
>>> config = Config(
...     {
...         "pipeline": {
...             "operations": [
...                 "identity",
...             ],
...         }
...     }
... )
>>> config.to_yaml("pipeline.yaml")
>>> pipeline = Pipeline.from_path("pipeline.yaml")
classmethod from_yaml(file_path, **kwargs)[source]

Deprecated. Use from_path() instead.

Return type:

Pipeline

get_dict(compact=True)[source]

Convert the pipeline to a dictionary.

Parameters:

compact (bool) – If True (default), only include parameters that differ from their defaults. If False, include all parameters for full reproducibility.

Return type:

dict

get_params(per_operation=False)[source]

Get a snapshot of the current parameters of the operations in the pipeline.

Parameters:

per_operation (bool) – If True, return a list of dictionaries for each operation. If False, return a single dictionary with all parameters combined.

property input_data_type

Get the input_data_type property of the pipeline.

insert(index, operation)[source]

Insert an operation at a specific index in the pipeline.

jit()[source]

JIT compile the pipeline.

property jit_options

Get the jit_options property of the pipeline.

property jittable

Check if all operations in the pipeline are jittable.

property key: str

Input key of the pipeline.

classmethod load(file_path, **kwargs)[source]

Load a pipeline from a JSON or YAML file.

Return type:

Pipeline

needs(key)[source]

Check if the pipeline needs a specific key at the input.

Return type:

bool

property needs_keys: set

Get a set of all input keys needed by the pipeline.

Will keep track of keys that are already provided by previous operations.

property operations: List[Operation | Pipeline]

Alias for self.layers to match the zea naming convention.

property output_data_type

Get the output_data_type property of the pipeline.

property output_key: str

Output key of the pipeline.

property output_keys: set

All output keys the pipeline guarantees to produce.

prepare_parameters(probe=None, scan=None, config=None, **kwargs)[source]

Prepare Probe, Scan and Config objects for the pipeline.

Serializes zea.core.Object instances and converts them to dictionary of tensors.

Parameters:
  • probe (Probe) – Probe object.

  • scan (Scan) – Scan object.

  • config (Config) – Config object.

  • **kwargs – Additional keyword arguments to be included in the inputs.

Returns:

Dictionary of inputs with all values as tensors.

Return type:

dict

prepend(operation)[source]

Prepend an operation to the pipeline.

reinitialize()[source]

Reinitialize the pipeline in place.

reset_timer()[source]

Reset the timer for timed operations.

set_params(**params)[source]

Set parameters for the operations in the pipeline by adding them to the cache.

property static_params: List[str]

Get a list of static parameters for the pipeline.

to_config(compact=True)[source]

Convert the pipeline to a zea.Config object.

Return type:

Config

to_json(compact=True)[source]

Convert the pipeline to a JSON string.

Return type:

str

to_yaml(file_path, compact=True)[source]

Convert the pipeline to a YAML file.

Return type:

None

unjit()[source]

Un-JIT compile the pipeline.

property unjitable_ops

Get a list of operations that are not jittable.

property valid_keys: set

Get a set of valid keys for the pipeline.

This is all keys that can be passed to the pipeline as input.

validate()[source]

Validate the pipeline by checking the compatibility of the operations.

property with_batch_dim

Get the with_batch_dim property of the pipeline.

class zea.ops.ReshapeGrid(*args, **kwargs)[source]

Bases: Operation

Reshape flat grid data to grid shape.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(grid, **kwargs)[source]
Parameters:

data (Tensor) – The flat grid data of shape (…, n_pix, …).

Returns:

The reshaped data of shape (…, grid.shape, …).

Return type:

Tensor
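The operation expands the flat n_pix axis back into the grid's spatial dimensions; a numpy sketch with an assumed 2D grid layout:

```python
import numpy as np

# Sketch of reshaping beamformed flat-grid data back to the 2D grid:
# the n_pix axis is expanded into the grid's spatial shape.
grid = np.zeros((80, 60, 3))  # (n_z, n_x, xyz) imaging grid (assumed layout)
n_pix = grid.shape[0] * grid.shape[1]
flat = np.random.default_rng(0).standard_normal((n_pix, 1))  # (n_pix, ch)

reshaped = flat.reshape(*grid.shape[:-1], flat.shape[-1])
assert reshaped.shape == (80, 60, 1)
```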

class zea.ops.ScanConvert(*args, **kwargs)[source]

Bases: Operation

Scan convert images to cartesian coordinates.

Initialize the ScanConvert operation.

Parameters:

order (int, optional) – Interpolation order. Defaults to 1. Currently only GPU support for order=1.

ADD_OUTPUT_KEYS: List[str] = ['resolution', 'x_lim', 'y_lim', 'z_lim', 'rho_range', 'theta_range', 'phi_range', 'd_rho', 'd_theta', 'd_phi']
STATIC_PARAMS = ['fill_value']
call(rho_range=None, theta_range=None, phi_range=None, resolution=None, coordinates=None, fill_value=None, **kwargs)[source]

Scan convert images to cartesian coordinates.

Parameters:
  • rho_range (Tuple) – Range of the rho axis in the polar coordinate system. Defined in meters.

  • theta_range (Tuple) – Range of the theta axis in the polar coordinate system. Defined in radians.

  • phi_range (Tuple) – Range of the phi axis in the polar coordinate system. Defined in radians.

  • resolution (float) – Resolution of the output image in meters per pixel. If None, the resolution is computed based on the input data.

  • coordinates (Tensor) – Coordinates for scan conversion. If None, will be computed based on rho_range, theta_range, phi_range and resolution. If provided, this operation can be jitted.

  • fill_value (float) – Value to fill the image with outside the defined region.
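Scan conversion amounts to inverse-mapping each cartesian output pixel to polar coordinates and interpolating; a scipy sketch for the 2D case (an illustration of the geometry, not the zea implementation; the theta convention here is an assumption):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sketch of scan conversion: resample a polar (rho, theta) image onto a
# cartesian (z, x) grid with order=1 interpolation, as in the default above.
img = np.random.default_rng(0).uniform(size=(256, 128))  # (rho, theta)
rho_range = (0.0, 0.06)                # meters
theta_range = (-np.pi / 4, np.pi / 4)  # radians
res = 2.5e-4                           # meters per pixel

x = np.arange(-0.05, 0.05, res)
z = np.arange(0.0, 0.06, res)
X, Z = np.meshgrid(x, z)
rho = np.sqrt(X**2 + Z**2)
theta = np.arctan2(X, Z)  # angle from the z axis (assumed convention)

# Map physical coordinates to fractional pixel indices of the polar image;
# out-of-range pixels are filled with cval (fill_value).
i = (rho - rho_range[0]) / (rho_range[1] - rho_range[0]) * (img.shape[0] - 1)
j = (theta - theta_range[0]) / (theta_range[1] - theta_range[0]) * (img.shape[1] - 1)
cart = map_coordinates(img, [i, j], order=1, cval=0.0)

assert cart.shape == Z.shape
```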

class zea.ops.Simulate(*args, **kwargs)[source]

Bases: Operation

Simulate RF data.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (operation will operate on this key) Defaults to “data”.

  • output_key – The key for the output data (operation will output to this key) Defaults to the same as the input key. If you want to store intermediate results, you can set this to a different key. But make sure to update the input key of the next operation to match the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

ADD_OUTPUT_KEYS: List[str] = ['n_ch']
STATIC_PARAMS = ['n_ax', 'apply_lens_correction']
call(scatterer_positions, scatterer_magnitudes, probe_geometry, apply_lens_correction, lens_thickness, lens_sound_speed, sound_speed, n_ax, center_frequency, sampling_frequency, t0_delays, initial_times, element_width, attenuation_coef, tx_apodizations, **kwargs)[source]

Simulate RF data from the given scatterer positions and magnitudes, using the provided probe geometry and transmit parameters.

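The core idea of point-scatterer simulation can be sketched as follows: each scatterer contributes an echo whose arrival time on each channel is its round-trip time of flight. This is a hypothetical single-scatterer, plane-wave sketch (not zea's simulator); lens correction, attenuation, element directivity, and apodization are omitted, and the pulse is an arbitrary Gaussian-windowed sinusoid.

```python
import numpy as np

# Hypothetical sketch of point-scatterer RF simulation, not zea's code.
sound_speed = 1540.0           # m/s
sampling_frequency = 20e6      # Hz
n_ax = 256                     # axial samples
center_frequency = 5e6         # Hz

# 8-element linear array on the x-axis; positions of shape (n_el, 3).
probe_geometry = np.stack(
    [np.linspace(-5e-3, 5e-3, 8), np.zeros(8), np.zeros(8)], axis=1
)
scatterer_positions = np.array([[0.0, 0.0, 8e-3]])
scatterer_magnitudes = np.array([1.0])
t0_delays = np.zeros(8)        # plane wave at zero angle
initial_times = 0.0

t = np.arange(n_ax) / sampling_frequency
rf = np.zeros((probe_geometry.shape[0], n_ax))
for pos, mag in zip(scatterer_positions, scatterer_magnitudes):
    # Transmit path: scatterer depth (zero-angle plane wave).
    # Receive path: distance from scatterer to each element.
    d_rx = np.linalg.norm(probe_geometry - pos, axis=1)
    tof = (pos[2] + d_rx) / sound_speed + t0_delays - initial_times
    # Gaussian-windowed sinusoid centered at each channel's time of flight.
    tau = t[None, :] - tof[:, None]
    rf += mag * np.exp(-(tau * center_frequency) ** 2) * np.cos(
        2 * np.pi * center_frequency * tau
    )
```

The echo peak on each channel lands at roughly `tof * sampling_frequency` samples, which is what a subsequent time-of-flight correction inverts.
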
class zea.ops.TOFCorrection(*args, **kwargs)[source]

Bases: Operation

Time-of-flight correction operation for ultrasound data.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (the operation will operate on this key). Defaults to “data”.

  • output_key – The key for the output data (the operation will output to this key). Defaults to the same as the input key. To store intermediate results, set this to a different key, and make sure the input key of the next operation matches the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

STATIC_PARAMS = ['f_number', 'apply_lens_correction']
call(flatgrid, sound_speed, polar_angles, focus_distances, sampling_frequency, f_number, demodulation_frequency, t0_delays, tx_apodizations, initial_times, probe_geometry, t_peak, tx_waveform_indices, transmit_origins, apply_lens_correction=None, lens_thickness=None, lens_sound_speed=None, sos_map=None, sos_grid_x=None, sos_grid_z=None, **kwargs)[source]

Perform time-of-flight correction on raw RF data.

Parameters:
  • raw_data (ops.Tensor) – Raw RF data to correct

  • flatgrid (ops.Tensor) – Grid points at which to evaluate the time-of-flight

  • sound_speed (float) – Sound speed in the medium

  • polar_angles (ops.Tensor) – Polar angles for scan lines

  • focus_distances (ops.Tensor) – Focus distances for scan lines

  • sampling_frequency (float) – Sampling frequency

  • f_number (float) – F-number for apodization

  • demodulation_frequency (float) – Demodulation frequency

  • t0_delays (ops.Tensor) – T0 delays

  • tx_apodizations (ops.Tensor) – Transmit apodizations

  • initial_times (ops.Tensor) – Initial times

  • probe_geometry (ops.Tensor) – Probe element positions

  • t_peak (float) – Time to peak of the transmit pulse

  • tx_waveform_indices (ops.Tensor) – Index of the transmit waveform for each transmit. (All zero if there is only one waveform)

  • transmit_origins (ops.Tensor) – Transmit origins of shape (n_tx, 3)

  • apply_lens_correction (bool) – Whether to apply lens correction

  • lens_thickness (float) – Lens thickness

  • lens_sound_speed (float) – Sound speed in the lens

  • sos_map (Tensor) – Speed-of-sound map of shape (Nz, Nx) in m/s.

  • sos_grid_x (Tensor) – x-coordinates of the sos_map columns.

  • sos_grid_z (Tensor) – z-coordinates of the sos_map rows.

Returns:

Dictionary containing tof_corrected_data

Return type:

dict

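The delay computation at the heart of time-of-flight correction can be sketched in NumPy. This is a hypothetical illustration for a zero-angle plane-wave transmit with nearest-sample gathering (no interpolation, apodization, f-number masking, or lens correction), not the zea implementation; the grid and array sizes are made up.

```python
import numpy as np

# Hypothetical sketch of time-of-flight correction, not zea's code.
sound_speed = 1540.0
sampling_frequency = 20e6
n_el, n_samples = 4, 512

# Element positions of shape (n_el, 3) and pixel grid of shape (n_pix, 3).
probe_geometry = np.stack(
    [np.linspace(-3e-3, 3e-3, n_el), np.zeros(n_el), np.zeros(n_el)], axis=1
)
flatgrid = np.array([[0.0, 0.0, 10e-3], [1e-3, 0.0, 15e-3]])
raw_data = np.random.default_rng(0).normal(size=(n_el, n_samples))

# Plane-wave transmit at zero angle: the transmit path is simply the depth z.
d_tx = flatgrid[:, 2]                                            # (n_pix,)
# Receive path: pixel-to-element distance for every (pixel, element) pair.
d_rx = np.linalg.norm(
    flatgrid[:, None, :] - probe_geometry[None, :, :], axis=-1
)                                                                # (n_pix, n_el)
tof = (d_tx[:, None] + d_rx) / sound_speed                       # seconds
idx = np.clip(np.rint(tof * sampling_frequency).astype(int), 0, n_samples - 1)

# Gather the nearest RF sample per (pixel, element).
tof_corrected_data = raw_data[np.arange(n_el)[None, :], idx]     # (n_pix, n_el)
```

Summing the corrected samples over the element axis would then give a basic delay-and-sum beamformed value per pixel.
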
class zea.ops.Threshold(*args, **kwargs)[source]

Bases: Operation

Threshold an array, setting values below/above a threshold to a fill value.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (the operation will operate on this key). Defaults to “data”.

  • output_key – The key for the output data (the operation will output to this key). Defaults to the same as the input key. To store intermediate results, set this to a different key, and make sure the input key of the next operation matches the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(threshold=None, percentile=None, **kwargs)[source]

Threshold the input data.

Parameters:
  • threshold – Numeric threshold.

  • percentile – Percentile to derive threshold from.

Returns:

Tensor with thresholding applied.

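The two thresholding modes (a fixed numeric threshold or a percentile-derived one) can be sketched in NumPy. This is a hypothetical illustration of the concept, not the zea implementation; the `fill_value` fallback to the threshold itself is an assumption for the example, mimicking a dynamic-range floor.

```python
import numpy as np

# Hypothetical sketch of thresholding, not zea's code: floor values below
# a threshold, which is either given directly or derived from a percentile.
def threshold_array(data, threshold=None, percentile=None, fill_value=None):
    if threshold is None:
        threshold = np.percentile(data, percentile)
    if fill_value is None:
        fill_value = threshold  # assumed default for this sketch
    return np.where(data < threshold, fill_value, data)

data = np.array([-60.0, -40.0, -20.0, 0.0])     # e.g. log-compressed dB values
out = threshold_array(data, threshold=-30.0)    # fixed -30 dB floor
out_pct = threshold_array(data, percentile=50.0)  # data-driven floor
```

A percentile-based threshold adapts to each frame's intensity distribution, which is convenient when the absolute signal level varies between acquisitions.
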
class zea.ops.UpMix(*args, **kwargs)[source]

Bases: Operation

Upmix IQ data to RF data.

Parameters:
  • input_data_type (DataTypes) – The data type of the input data

  • output_data_type (DataTypes) – The data type of the output data

  • key – The key for the input data (the operation will operate on this key). Defaults to “data”.

  • output_key – The key for the output data (the operation will output to this key). Defaults to the same as the input key. To store intermediate results, set this to a different key, and make sure the input key of the next operation matches the output key of this operation.

  • cache_inputs – A list of input keys to cache or True to cache all inputs

  • cache_outputs – A list of output keys to cache or True to cache all outputs

  • jit_compile – Whether to JIT compile the ‘call’ method for faster execution

  • with_batch_dim – Whether operations should expect a batch dimension in the input

  • jit_kwargs – Additional keyword arguments for the JIT compiler

  • jittable – Whether the operation can be JIT compiled

  • additional_output_keys – A list of additional output keys produced by the operation. These are used to track if all keys are available for downstream operations. If the operation has a conditional output, it is best to add all possible output keys here.

call(sampling_frequency=None, demodulation_frequency=None, **kwargs)[source]

Upmix IQ data to RF data at the given sampling and demodulation frequencies.

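The mathematical operation behind upmixing can be sketched as modulating the complex baseband (IQ) signal back up by the demodulation frequency and taking the real part. This is a hypothetical illustration on a pure carrier, not the zea implementation; the factor of 2 assumes the usual 1/2 amplitude scaling from ideal IQ demodulation.

```python
import numpy as np

# Hypothetical sketch of upmixing IQ -> RF, not zea's code.
sampling_frequency = 20e6
demodulation_frequency = 5e6
n = 256
t = np.arange(n) / sampling_frequency

# A pure carrier at the demodulation frequency...
rf_original = np.cos(2 * np.pi * demodulation_frequency * t)
# ...ideally demodulates to a constant complex baseband of amplitude 0.5.
iq = np.full(n, 0.5 + 0j)

# Upmix: remodulate by the carrier and take the real part; the factor of 2
# restores the original amplitude under the assumed demodulation scaling.
rf_upmixed = 2.0 * np.real(iq * np.exp(2j * np.pi * demodulation_frequency * t))
```
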
zea.ops.get_ops(ops_name)[source]

Get the operation from the registry.

Modules

keras_ops

Auto-generated zea.Operation for all unary keras.ops and keras.ops.image functions.