Computational Techniques and Imaging Innovations in the Age of AI

Tuesday 1 April 2025

10:45 Registration & Coffee
11:20 Welcome
11:30 Fast Neumann Series models of acoustic propagation
I will discuss a family of fast acoustic models based on Neumann series that describe acoustic propagation in absorbing, nonlinear and heterogeneous media, with potential applications in ultrasound therapy and tomography.
Ben Cox University College London
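The Neumann series idea underlying the talk can be sketched numerically: when an operator K has spectral radius below one, the solution of (I - K)x = b is the convergent sum b + Kb + K²b + …. A minimal generic illustration (not the speaker's acoustic model, just the iteration itself):

```python
import numpy as np

def neumann_solve(K, b, n_terms=50):
    """Approximate the solution of (I - K) x = b by the partial
    Neumann series b + K b + K^2 b + ... (converges when the
    spectral radius of K is below 1)."""
    x = b.copy()
    term = b.copy()
    for _ in range(n_terms):
        term = K @ term   # next series term K^n b
        x += term         # accumulate the partial sum
    return x

rng = np.random.default_rng(0)
K = 0.1 * rng.standard_normal((5, 5))   # small norm ensures convergence
b = rng.standard_normal(5)
x = neumann_solve(K, b)
# agrees with the direct solve of (I - K) x = b
assert np.allclose(x, np.linalg.solve(np.eye(5) - K, b))
```

In practice the appeal of such series is that each term only requires applying K, so fast operator applications (e.g. FFT-based) yield fast overall models.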
12:00 Symmetries and Polarization Tensors for Electromagnetic Detection
In many inverse problems we want to detect, locate or classify objects. In the context of electromagnetic detection of objects in the eddy current regime, polarization tensors can be used to represent the effect of an object on the field. We show how the symmetries of the object can be understood in terms of harmonic polynomials invariant under groups. We also show how the real and imaginary parts of the rank-2 tensor can be used for classification, and discuss measures of how the eigenvectors differ. While ML is a powerful tool for classification, novel algebra and geometry also arise in these problems. This is joint work with Paul Ledger and James Elgy.
Bill Lionheart University of Manchester
12:30 Utilising the radiative transfer equation in optical tomography
Image reconstruction in optical tomography is an ill-posed problem that needs to be approached in the framework of inverse problems. Computational solutions of this problem require modelling of light propagation. In biological tissues, the radiative transfer equation or its approximations are used as light transport models. In this talk, the use of the radiative transfer equation in optical tomography is discussed. In particular, the use of approximate models and the modelling of the related errors and uncertainties are studied.
Tanja Tarvainen University of Eastern Finland
13:00 Lunch & Poster Session
Xinyuan Wang
X-ray Computed Tomography (CT) is an essential imaging modality for revealing the interior of static objects in many fields, such as science, healthcare and industrial inspection. However, the sequential nature of its data acquisition limits its capacity to visualize dynamic processes with high temporal resolution, and conventional image reconstruction techniques applied to dynamic CT result in considerable motion artifacts. While more sophisticated dynamic image reconstruction methods can improve the spatial and temporal resolution to some extent, the full potential of dynamic CT is not yet realized. We propose a multiscale temporal learning approach to accelerate the Time-dependent Deep Image Prior and enhance reconstruction quality. Our framework integrates these techniques into real-world dynamic CT applications, leveraging deep generative models in an unsupervised setting. We validate its effectiveness on both simulated and real data, demonstrating faster convergence and improved reconstruction fidelity.
Ziyang Chen
I have explored novel applications of cameras that capture images without conventional lenses, instead using computational methods to reconstruct images from encoded light patterns. This approach enables extremely compact and low-cost imaging systems with wide fields of view. I have developed a tracking system combining laser speckle patterns with lensless imaging that achieves sub-degree accuracy for AR/VR applications. My research aims to leverage miniaturized imaging systems to achieve novel data captures.
Shady Adib
This abstract outlines the development of self-generating digital twins enhanced by generative AI, physics-informed machine learning, and dynamic diffusion models for real-time structural health monitoring (SHM). By integrating inverse problem-solving techniques with advanced AI methods, this framework addresses key challenges in structural damage detection, uncertainty quantification, and predictive analysis. These innovations provide fast, accurate, and reliable solutions, contributing to infrastructure resilience and sustainability. This research aligns with the workshop's themes, demonstrating how the synergy of classical mathematics and AI can transform practical applications in engineering and beyond.
Serban Cristian Tudosie
Learned single-pixel fluorescent microscopy. Single-pixel imaging (SPI) has recently become a key imaging technique in microscopy, significantly reducing the cost of multispectral and time-resolved systems. SPI relies on compressive sensing theory and requires retrieving the image from multiple measurements – dot products between the sample and structured light. In practice, current approaches solve a Rudin–Osher–Fatemi (ROF) model for the compressed noisy measurements and use orthogonal pseudorandom patterns for illumination. We show that we can achieve higher compression and quality by leveraging data to learn the patterns and the reconstruction function, accelerating scientific discoveries in fluorescent microscopy.
Sam Porter
We propose a new method for the joint reconstruction of PET and SPECT data in the context of Yttrium-90 emission tomography using a directional total nuclear variation prior. We present a preliminary comparison with another state-of-the-art method, hybrid kernelised expectation maximisation. The potential advantages and trade-offs of each method are explored, with a particular focus on quantification as a key factor in improving dosimetry.
Oriol Roche i Morgó
Electron ptychography is an imaging method which provides 2D and 3D images of materials at atomic-level resolution, by finding the complex function which describes the electron (illumination) probe and the object. We present a machine learning model which gives an initial guess for the probe when presented with a set of raw data. This guess is then used as a seed for the ptychographic reconstruction algorithm called extended ptychographic iterative engine (ePIE), which potentially saves time and improves convergence, therefore simplifying the imaging and reconstruction process. This can also be extended to x-ray ptychography.
Nazila Kazemigazestane
This PhD project combines Spectral Domain Optical Coherence Tomography (SD-OCT) with advanced deep learning techniques for non-invasive assessments with the aim of early caries detection. Its objectives include enhancing diagnostic accuracy, developing deep learning models for caries classification, and establishing an open-access OCT dataset. By integrating OCT with machine learning, the research emphasizes earlier, more accurate, and non-invasive care, advancing precision dentistry and ultimately improving patient outcomes.
Nadja Gruber
We propose Noisier2Inverse, a self-supervised deep learning approach for joint denoising and reconstruction of inverse problems with statistically correlated noise. Such noise appears in computed tomography, microscopy, and seismic imaging due to detector imperfections, photon scattering, or physical interactions. Building on Noisier2Noise, our method eliminates the need for clean-noisy image pairs and requires only a few training samples. We design a specialized loss function that outperforms two-step approaches and surpasses state-of-the-art self-supervised CT reconstruction in correlated noise scenarios. A key assumption is knowledge of the noise distribution, which we assess through ablation studies to determine required parameter accuracy.
Maximilian Kiss
In this benchmarking study, we use 2DeteCT, a real-world experimental computed tomography dataset, for comparing machine learning based CT image reconstruction algorithms. We provide a pipeline for easy implementation and evaluation and test post-processing methods, learned/unrolled iterative methods, learned regularizer methods, and plug-and-play methods using key performance metrics, including SSIM and PSNR. We showcase the effectiveness of various algorithms on tasks such as full data reconstruction, limited-angle reconstruction, sparse-angle reconstruction, low-dose reconstruction, and beam-hardening corrected reconstruction. The reproducible setup of methods and CT image reconstruction tasks in LION, an open-source toolbox, enables straightforward addition and comparison of new methods.
Matt Foster
Weakly Nonlinear Ray Theory with Applications in HIFU: High-intensity focused ultrasound (HIFU) is a therapeutic treatment that uses ultrasound waves to non-invasively destroy malignant cells inside the human body. This work presents a nonlinear ray tracing approach to modelling the focused waves. The ray equations are identical to those from linear ray theory, while the amplitude equation is a nonlinear transformation of Burgers' equation. These rays ignore the diffraction of the wave, so they are replaced by finite-frequency rays to avoid caustics. The results of this method are then compared with the pseudo-spectral solution of the Westervelt equation.
Martin Ludvigsen
We present Maximum Discrepancy NMF (MDNMF), a variant of Non-Negative Matrix Factorization (NMF) for inverse problems and source separation. MDNMF learns features of both true and adversarial data. We demonstrate how NMF dictionaries can automatically detect and remove noise, leading to improved signal reconstruction in image and audio data.
14:30 Inverse is the way forward: Data-driven Learning and Knowledge-based Reasoning Symbiosis for Principled Symbolic Discovery
Abstracting system behavior into mathematical models is essential for science and engineering. Scientific discovery requires reconciling noisy data with incomplete background knowledge of universal laws. Historically, models were derived deductively from first principles—yielding interpretable, universal models from limited data, but demanding significant expertise and time. Meanwhile, statistical AI enables rapid model construction with remarkable scalability due to predetermined functional structures. However, these approaches often produce non-interpretable models requiring extensive training data and showing limited generalization. This talk examines recent efforts to bridge statistical and symbolic AI. We'll discuss algorithms that search for free-form symbolic models without predetermined structures or primitives. We'll review innovations in automated theorem proving (ATP) for certifying hypothesis models against background theory. Finally, we shall explore approaches to unify these complementary methods.
Lior Horesh IBM Research
15:00 Survival and Why?
A central tool in Survival Analysis is the Cox Model. We argue that despite its utility as a ranking objective there are potentially better alternatives for time-to-death prediction, with an example application to idiopathic pulmonary fibrosis (IPF). A second issue we address is how to interpret medical images to identify biomarkers. We argue that standard perturbative methods such as Grad-CAM are unreliable, and we provide a robust alternative that is inherently interpretable. We apply the method to identifying biomarkers of IPF in CT scans, revealing very different results from standard perturbation analysis.
David Barber University College London
15:30 Coffee Break
16:00 Learning regularisation for inverse problems
Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. Most inverse problems of interest are ill-posed and require appropriate mathematical treatment for recovering meaningful solutions. Regularisation is one of the main mechanisms to turn inverse problems into well-posed ones by adding prior information about the unknown quantity to the problem, often in the form of assumed regularity of solutions. Classically, such regularisation approaches are handcrafted. Examples include Tikhonov regularisation, the total variation and several sparsity-promoting regularisers such as the L1 norm of wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically and computationally robust solutions to inverse problems, providing a universal approach to their solution, they are also limited by our ability to model solution properties and to realise these regularisation approaches computationally. Recently, a new paradigm has been introduced to the regularisation of inverse problems, which derives regularisation approaches for inverse problems in a data-driven way. Here, regularisation is not mathematically modelled in the classical sense, but modelled by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand by appropriately selected (and usually plenty of) training data. In this talk, I will review some machine learning based regularisation techniques and present some work on unsupervised and deeply learned (weakly) convex regularisers and their application to image reconstruction from tomographic measurements.
Carola-Bibiane Schönlieb University of Cambridge
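As a concrete instance of the handcrafted regularisation the abstract mentions, Tikhonov regularisation replaces the ill-posed least-squares problem min ||Ax - y||² by min ||Ax - y||² + α||x||², whose minimiser is x_α = (AᵀA + αI)⁻¹Aᵀy. A minimal sketch with illustrative values (not taken from the talk):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Closed-form Tikhonov solution (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

A = np.diag([1.0, 0.01])                  # one well- and one badly-resolved mode
x_true = np.array([1.0, 1.0])
y = A @ x_true + np.array([0.001, 0.01])  # small data noise
x_naive = np.linalg.solve(A, y)           # noise amplified 100x by the weak mode
x_reg = tikhonov(A, y, alpha=1e-4)        # damping restores a sensible answer
assert np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true)
```

The choice of α trades data fidelity against the prior; the learned regularisers discussed in the talk replace the ||x||² term with a data-driven functional while keeping this variational structure.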
16:30 Open Source Software as a tool for research in advanced synergistic image reconstruction
The use of Open Source Software (OSS) continues to grow in the research community, ideally complemented by open data. There is therefore a strong need for researchers to contribute to OSS to enhance its capabilities. In this talk, I will present examples of advanced image reconstruction methods that were developed in joint research with Prof Arridge and illustrate how the research benefitted from existing OSS tools, but also how initial proof-of-concept code has been further developed into contributions to various OSS packages, enabling other researchers to evaluate and improve these methods. Examples will be from multi-modality reconstruction using PET, SPECT and/or MR data.
Kris Thielemans University College London
17:00 Break
19:00 Conference Dinner

Wednesday 2 April 2025

10:00 On the early stopping of untrained convolutional neural networks
Bangti Jin The Chinese University of Hong Kong
10:30 Back to the roots - a simple analysis of how learning with the wrong error level affects reconstructions
Learned regularization schemes for inverse problems more often than not use data with a fixed noise level. We use a somewhat simplified abstract analytical model in order to analyse the effect of applying such methods to data with different noise levels. As a start, we analyse how classical regularization schemes such as Tikhonov, where the regularization parameter is chosen according to some noise level, behave when applied to data with rather different noise levels. This is joint work with M. Iske.
Peter Maaß Universität Bremen
11:00 Coffee Break
11:30 No time to waste - faster optical imaging and MRI scans
Non-invasive imaging technologies have improved our understanding of human health and disease. Recently, advances in imaging speed, coupled with faster image reconstruction, processing and analysis methodologies, have improved accuracy, efficiency, subject comfort and scan tolerance; facilitated dynamic studies across various fields of research and clinical practice; and enabled real-time monitoring and quantitative analyses. In this talk, I will describe some of our past and ongoing research focused on the development of accelerated workflows, including AI-based approaches, particularly for cardiac magnetic resonance imaging (CMR) and 3D optical imaging.
Teresa Correia Centro de Ciências do Mar do Algarve
12:00 Adjoint Sobolev Embedding Operators and their Connection to Inverse Problems
Sobolev spaces are useful both for the formulation of inverse problems and for the incorporation of, e.g., smoothness properties or statistics of their solutions. As a result, the Sobolev embedding operator and its adjoint are common components in both iterative and variational regularization methods for the computation of a solution. However, while the embedding operator itself is trivial, its adjoint typically is not, and the study of its properties and different representations is of importance both theoretically and practically. Hence, in this talk we will present different characterizations of the adjoint embedding operators and their use in standard tomography, atmospheric tomography and photoacoustic tomography.
Ronny Ramlau Johannes Kepler Universität Linz
12:30 Lunch & Poster Session
Marcos Obando
In the Bayesian framework, the solution of the inverse problem of reconstructing the image is given by the posterior distribution, which combines our prior knowledge about the image with the measurement information. When using state-of-the-art implicit priors, the posterior can only be accessed by sophisticated sampling techniques. In sequential OED (sOED), we adapt the sensing pattern during the measurement process. As this is prohibitive for real-time implementation, we explore solving sOED using reinforcement learning (RL): RL attempts to learn a policy, i.e., a mapping from the current posterior distribution to a probability distribution over which sensing pattern is best to choose next.
Klara Bas
Quantitative MRI enables comparable results in longitudinal and multicentre studies. I use variational inference to parcellate grey matter structures, such as the amygdala, from multimodal MRI data. Multiparametric Mapping (MPM) data are used to estimate PD, MT, R1, and R2* maps, while diffusion data are used to generate probabilistic tracts modelling neural pathways. These are represented as a 2D connectivity histogram. A generative model is optimised with variational Bayesian inference to cluster voxels, providing tissue priors in average space and subject-specific parcellations. This data-driven, in-vivo approach enhances structural segmentation for neuroscience research.
Lara Bonney
PET imaging data contains a wealth of quantitative information (radiomics) that could provide valuable contributions to characterising tumours when combined with other 'omic data. However, dependence on image acquisition and reconstruction methods limits the generalisability of radiomic models. Radiomic feature stability to image noise was assessed by varying reconstruction methods, using a dual approach in a retrospective sarcoma clinical dataset (N=20) and a modular heterogeneous imaging phantom. The next phase will identify reliable and useful features for heterogeneity assessment in a 10-year retrospective sarcoma cohort from Oxford University Hospitals (N=1,052).
Lara Carter
My research focuses on leveraging Natural Language Processing (NLP) to address inefficiencies and fragmentation in pharmacokinetic (PK) modelling. By developing a unified framework for automating the extraction and synthesis of PK data from biomedical texts, my work integrates Named Entity Recognition (NER) and Named Entity Linking (NEL) using a multi-task learning approach. This innovative pipeline mitigates error propagation and enhances data processing efficiency. Extracted data will form a comprehensive PK database to optimize dosing model selection through neural networks and algorithmic methods. Ultimately, this research aims to streamline clinical trials, reduce costs, and improve drug development outcomes.
Imraj Singh
In this work we will show the direct formulation of inverse problems with Gaussian basis-functions. In particular, we introduce added flexibility by making the basis-functions learnable.
Jakob Reichmann
X-ray nano-holotomography (XNH) has recently emerged as a scalable, non-destructive imaging modality complementary to electron microscopy, most prominently for 3D tissue analysis and connectomics research. To reliably identify synaptic connections, fine axons and spine heads, the resolving power of XNH can be increased by algorithmic and computational advances, namely new iterative reconstruction schemes for simultaneous retrieval of the complex-valued probe function. Additionally, ANNs can enhance image quality through denoising and transfer learning between multimodal, correlative datasets. These advancements position XNH as a powerful tool for connectomics research.
Andrea Mazzolani
As an early-career researcher with a PhD in biomedical engineering and five years of experience in deep learning, I specialise in developing innovative models for biomedical imaging and optical coherence elastography. My work includes optimising convolutional neural networks (CNNs) for strain prediction in tissue samples and accelerating computational frameworks for realistic image simulations. I am eager to deepen my knowledge of cutting-edge deep learning techniques and engage with experts in the field to further advance my research and applications.
Emilio McAllister Fognini
We propose a neural network architecture, Neural-FMM, that integrates the Fast Multipole Method (FMM) into a hierarchical machine learning framework for parameterising the Green's operator of elliptic PDEs. Our architecture leverages the FMM's computation flow to efficiently separate local and far-field interactions, learning their respective representations. The Neural-FMM replaces the traditional hand-crafted FMM translation operators with deep feedforward neural networks while maintaining non-local information flow. We will also discuss modifications to handle non-stationary kernels and numerical experiments on inhomogeneous and heterogeneous Helmholtz equations to demonstrate the Neural-FMM's effectiveness in solving scattering problems with variable sound speed maps and incident fields.
Elliott Macneil
Photoacoustic tomography (PAT) is a hybrid imaging technique based on the photoacoustic effect. The PAT forward problem can be modelled as an initial value problem for the linear wave equation. Fourier Neural Operators (FNOs) offer a data-driven, resolution-invariant approach to efficiently solve this problem. However, due to Fourier mode truncation, they struggle to represent higher-frequency data. Gaussian Beams (GB), on the other hand, are an example of a high-frequency approximation to the linear wave equation. We propose a novel hybrid GB-FNO approach for efficiently solving the PAT forward problem, whilst capturing the whole frequency spectrum.
Dongdong Chen
Exploring self-supervised learning and generative modelling for inverse problems in computational imaging and vision.
14:00 Optical Tomography Redux
I will revisit some old stories from the golden age of optical tomography and will discuss a possible role for machine learning. This is joint work with Lu Lu and Zijian Wang.
John Schotland Yale University
14:30 Multispectral fluorescence lifetime microscopy based on computational imaging techniques
We present a wide-field 3D multispectral fluorescence lifetime imaging (λ FLIM) microscope combining compressive sensing, single-pixel camera, structured illumination and data fusion techniques in an integrated computational imaging approach towards real-time acquisition of multidimensional optically sectioned images. The extension of the proposed approach for wide field imaging of biological tissue will be discussed. This is joint work with Valerio Gandolfi, Federico Simoni, Alberto Ghezzi, Shivaprasad Varakkoth, Serban C. Tudosie, Simon Arridge, Gianluca Valentini, and Andrea Farina.
Cosimo d'Andrea Politecnico di Milano
15:00 The Light Side and the Dark Side: Inverse Problems in the Real World
Optical tomography is a powerful imaging technique that has benefited from the integration of computational methods and artificial intelligence (AI). This talk will explore the latest advancements in optical tomography, focusing on the role of AI in enhancing image reconstruction and analysis. We will discuss the challenges and opportunities in the field, and how the synergy between computational techniques and AI is paving the way for new discoveries and applications in biomedical imaging and other areas.
Simon Arridge University College London
15:15 Reception