About

zea is a toolbox intended to support research towards cognitive ultrasound imaging, a concept described by van Sloun [A-1]. The central idea is to close the action-perception loop in ultrasound imaging: acquisition and reconstruction are tightly coupled in order to tackle some of the field's persistent challenges.

High-level overview of an ultrasound perception-action loop implemented in zea.

Vision

The toolbox is intended for anyone exploring cutting-edge ultrasound research and development who wants to integrate the latest advances in probabilistic machine learning into a fast and flexible ultrasound image reconstruction pipeline. Many persistent challenges — such as artifacts (haze, reverberation, shadowing, aberration), limited resolution or penetration depth, and the inherent trade-off between image quality, field of view, and acquisition time — can be approached by closing the action-perception loop. Where and how you measure ultrasound data (action) greatly influences how well you can reconstruct an image or estimate a diagnostic parameter (perception).

This imaging paradigm is largely enabled by the availability of powerful statistical models that can learn from data to improve reconstruction in difficult scenarios [A-2], for example, when measurements are limited. Beyond reconstruction, these models can also guide the acquisition process by optimizing the transmit sequence for a given downstream task, e.g., a Doppler measurement [A-3], estimation of a diagnostic biomarker [A-4], or segmentation of a certain structure [A-5].

To enable cognitive ultrasound imaging, the traditional ultrasound image reconstruction pipeline must be tightly integrated with the models and algorithms used to learn from data and to optimize the acquisition process. This toolbox provides a modular and flexible framework to do so, helping researchers minimize the time from idea to implementation by removing the need to build the surrounding infrastructure (data and parameter handling and loading, a differentiable ultrasound reconstruction pipeline, model infrastructure, etc.).
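As a toy illustration of such a closed loop (deliberately simplified; all names below are made up for this sketch and are not part of the zea API), consider adaptively choosing where to sample a 1-D signal. Perception reconstructs the signal from sparse measurements, and the action step picks the next measurement location where coverage is poorest:

```python
# Conceptual action-perception loop. Illustrative only; NOT the zea API.
# Perception: reconstruct a 1-D signal from sparse samples by linear
# interpolation. Action: acquire the next sample at the midpoint of the
# widest unobserved gap (a crude proxy for reconstruction uncertainty).

true_signal = [0.0, 0.1, 0.9, 1.0, 1.0, 0.2, 0.1, 0.0]
n = len(true_signal)

def reconstruct(samples, n):
    """Perception: linearly interpolate between the observed samples."""
    xs = sorted(samples)
    estimate = []
    for i in range(n):
        if i in samples:
            estimate.append(samples[i])
            continue
        left = max((x for x in xs if x < i), default=None)
        right = min((x for x in xs if x > i), default=None)
        if left is None:
            estimate.append(samples[right])
        elif right is None:
            estimate.append(samples[left])
        else:
            w = (i - left) / (right - left)
            estimate.append((1 - w) * samples[left] + w * samples[right])
    return estimate

def next_action(samples):
    """Action: return the midpoint of the widest unobserved gap."""
    xs = sorted(samples)
    gaps = [(b - a, (a + b) // 2) for a, b in zip(xs, xs[1:]) if b - a > 1]
    return max(gaps)[1] if gaps else None

# Close the loop: perceive, decide where to measure next, acquire.
samples = {0: true_signal[0], n - 1: true_signal[-1]}
for _ in range(3):
    estimate = reconstruct(samples, n)     # perception
    i = next_action(samples)               # action
    if i is None:
        break
    samples[i] = true_signal[i]            # acquire the chosen measurement
```

Each reconstruction informs which measurement to take next, and each new measurement improves the next reconstruction — the same coupling that, at much larger scale and with learned models in place of interpolation, underlies cognitive ultrasound.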

While the full realization of cognitive ultrasound imaging remains an ongoing effort, we hope this toolbox will help spur further research and development in the field.

Note

What’s in a name?

It’s just a name… If we have to give it some meaning: zea is derived from the scientific name for corn, Zea mays, a staple food crop. If you look at the logo, you can see that the kernels of the corn cob bear some resemblance to either a sensing matrix or the elements of a transducer array. The high-dimensional and structured nature of the corn cob also reflects the complexity of ultrasound data.

Core maintainers

Active contributors

A list of active contributors can be found on the GitHub contributors page. If you would like to contribute, please see the Contributing guide.

License

This project is licensed under the Apache License 2.0.

Citation

Please see the Citation guide for information on how to cite zea.

Papers

The following list contains some of the papers that have been published using zea. If you have used zea in your work, please consider adding your paper to the list by creating a pull request on GitHub. See the Contributing guide for more information.

[A-1]

Ruud J. G. van Sloun. Active inference and deep generative modeling for cognitive ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 71(11):1478–1490, 2024. doi:10.1109/TUFFC.2024.3466290.

[A-2]

Tristan S. W. Stevens. Ultrasound Imaging in the Era of Deep Generative Modeling. Eindhoven University of Technology, Eindhoven, The Netherlands, 2026. ISBN 978-94-6537-111-5. PhD thesis. URL: https://research.tue.nl/nl/publications/ultrasound-imaging-in-the-era-of-deep-generative-modeling.

[A-3]

Beatrice Federici, Ruud J. G. van Sloun, and Massimo Mischi. Active Inference for Closed-loop Transmit Beamsteering in Fetal Doppler Ultrasound. IEEE Open Journal of Ultrasonics, Ferroelectrics, and Frequency Control, pages 1–1, 2025. doi:10.1109/OJUFFC.2025.3636108.

[A-4]

Oisín Nolan, Wessel L. van Nierop, Louis D. van Harten, Tristan S. W. Stevens, and Ruud J. G. van Sloun. Task-based adaptive transmit beamforming for efficient ultrasound quantification. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain. 2026. doi:10.48550/ARXIV.2601.20711.

[A-5]

Wessel L. van Nierop, Oisín Nolan, Tristan S. W. Stevens, and Ruud J. G. van Sloun. Patient-Adaptive Focused Transmit Beamforming using Cognitive Ultrasound. CoRR, 2025. doi:10.48550/ARXIV.2508.08782.

[A-6]

Vincent van de Schaft, Oisín Nolan, and Ruud J. G. van Sloun. Off-grid Ultrasound Imaging by Stochastic Optimization. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 72:1245–1255, 2025. doi:10.1109/TUFFC.2025.3586377.

[A-7]

Tristan S. W. Stevens, Faik C. Meral, Jason Yu, Iason Zacharias Apostolakis, Jean-Luc Robert, and Ruud J. G. van Sloun. Dehazing Ultrasound Using Diffusion Models. IEEE Transactions on Medical Imaging, 43(10):3546–3558, 2024. doi:10.1109/TMI.2024.3363460.

[A-8]

Tristan S. W. Stevens, Oisín Nolan, Jean-Luc Robert, and Ruud J. G. van Sloun. Sequential Posterior Sampling with Diffusion Models. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 1–5. IEEE, 2025. doi:10.1109/ICASSP49660.2025.10889752.

[A-9]

Tristan S. W. Stevens, Oisín Nolan, Oudom Somphone, Jean-Luc Robert, and Ruud J. G. van Sloun. High Volume Rate 3D Ultrasound Reconstruction with Diffusion Models. IEEE Transactions on Medical Imaging, 2025. doi:10.1109/TMI.2025.3645849.

[A-10]

Oisín Nolan, Tristan S. W. Stevens, Wessel L. van Nierop, and Ruud J. G. van Sloun. Active Diffusion Subsampling. Transactions on Machine Learning Research, 2025. URL: https://openreview.net/forum?id=OGifiton47.

[A-11]

Simon W. Penninga, Hans van Gorp, and Ruud J. G. van Sloun. Deep Sylvester Posterior Inference for Adaptive Compressed Sensing in Ultrasound Imaging. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 1–5. IEEE, 2025. doi:10.1109/ICASSP49660.2025.10888253.

[A-12]

Tristan S. W. Stevens, Jeroen Overdevest, Oisín Nolan, Wessel L. van Nierop, Ruud J. G. van Sloun, and Yonina C. Eldar. Deep generative models for Bayesian inference on high-rate sensor data: applications in automotive radar and medical imaging. Philosophical Transactions A, 383(2299):20240327, 2025. doi:10.1098/rsta.2024.0327.

[A-13]

Ben Luijten, Regev Cohen, Frederik J. de Bruijn, Harold A. W. Schmeitz, Massimo Mischi, Yonina C. Eldar, and Ruud J. G. van Sloun. Adaptive Ultrasound Beamforming Using Deep Learning. IEEE Transactions on Medical Imaging, 39(12):3967–3978, 2020. doi:10.1109/TMI.2020.3008537.

[A-14]

Tristan S. W. Stevens, Oisín Nolan, and Ruud J. G. van Sloun. Semantic diffusion posterior sampling for cardiac ultrasound dehazing. In MICCAI DehazingEcho Challenge. Daejeon, Republic of Korea, 2025. doi:10.48550/ARXIV.2508.17326.

[A-15]

Tristan S. W. Stevens, Oisín Nolan, Jean-Luc Robert, and Ruud J. G. van Sloun. Nuclear Diffusion Models for Low-Rank Background Suppression in Videos. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain. 2026. doi:10.48550/ARXIV.2509.20886.

[A-16]

Beatrice Federici, Ruud J. G. van Sloun, and Massimo Mischi. Information seeking transmit beamforming for cognitive ultrasound. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain. 2026.