
Title: B-spline Parameterized Joint Optimization of Reconstruction and K-space Sampling Patterns (BJORK) for Accelerated 2D Acquisition
The proposed approach, BJORK, provides a robust and generalizable workflow for jointly optimizing non-Cartesian sampling patterns and a physics-informed reconstruction. Several techniques, including B-spline re-parameterization of the trajectories, multi-level optimization, and non-Cartesian unrolled neural networks, are introduced to improve training effectiveness and avoid sub-optimal local minima. In vivo experiments show that the networks and trajectories learned on a simulated dataset transfer to real acquisitions, even under different contrast weightings and noise levels, and yield improved image quality compared with previous learning-based and model-based trajectory optimization methods.
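As a rough sketch of the trajectory re-parameterization described above (not the authors' implementation), the snippet below represents a 2D non-Cartesian readout as a cubic B-spline of a small number of control points using SciPy. The control-point count, readout length, and radial-spoke initialization are assumptions for illustration; in the joint optimization, the control points would be the learnable variables.

```python
# Hypothetical sketch: parameterize a 2D k-space readout by a few B-spline
# control points, so that optimizing the control points smoothly deforms
# the dense trajectory.
import numpy as np
from scipy.interpolate import BSpline

n_ctrl, degree, n_readout = 16, 3, 1024            # assumed sizes
# Clamped knot vector so the curve starts/ends at the first/last control point.
knots = np.concatenate(([0.0] * degree,
                        np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        [1.0] * degree))
# Control points in k-space (kx, ky), here initialized from a radial spoke.
ctrl = np.stack([np.linspace(-0.5, 0.5, n_ctrl),
                 np.zeros(n_ctrl)], axis=1)          # shape (n_ctrl, 2)

spline = BSpline(knots, ctrl, degree)                # vector-valued B-spline
t = np.linspace(0.0, 1.0, n_readout)                 # readout time samples
traj = spline(t)                                     # dense (n_readout, 2) trajectory

# Gradient/slew constraints would be enforced on finite differences of `traj`;
# in a joint optimization, `ctrl` is the variable being learned.
print(traj.shape)                                    # (1024, 2)
```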
Award ID(s):
1838179
NSF-PAR ID:
10309641
Journal Name:
International Society for Magnetic Resonance in Medicine
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a discrete-optimization technique for finding feasible robot arm trajectories that pass through provided 6-DOF Cartesian-space end-effector paths with high accuracy, a problem called pathwise-inverse kinematics. The output from our method consists of a path function of joint angles that best follows the provided end-effector path function, given some definition of "best". Our method, called Stampede, casts the robot motion translation problem as a discrete-space graph-search problem where the nodes in the graph are individually solved for using non-linear optimization; framing the problem in such a way gives rise to a well-structured graph that affords an effective best-path calculation using an efficient dynamic-programming algorithm. We present techniques for sampling configuration space, such as diversity sampling and adaptive sampling, to construct the search space in the graph. Through an evaluation, we show that our approach performs well in finding smooth, feasible, collision-free robot motions that match the input end-effector trace with very high accuracy, while alternative approaches, such as a state-of-the-art per-frame inverse kinematics solver and a global non-linear trajectory-optimization approach, performed unfavorably. (A toy sketch of the dynamic-programming step appears after this list.)
  2. We present a virtual element method (VEM)-based topology optimization framework using polyhedral elements, which allows for convenient handling of non-Cartesian design domains in three dimensions. We take full advantage of the VEM properties by creating a unified approach in which the VEM is employed in both the structural and the optimization phases. In the structural problem, the VEM is adopted to solve the three-dimensional elasticity equation. Compared to the finite element method, the VEM does not require numerical integration (when linear elements are used) and is less sensitive to degenerate elements (e.g., ones with skinny faces or small edges). In the optimization problem, we introduce a continuous approximation of material densities using the VEM basis functions. When compared to the standard element-wise constant approximation, the continuous approximation enriches the geometrical representation of structural topologies. Through two numerical examples with exact solutions, we verify the convergence and accuracy of both the VEM approximations of the displacement and material density fields. We also present several design examples involving non-Cartesian domains, demonstrating the main features of the proposed VEM-based topology optimization framework. The source code for a MATLAB implementation of the proposed work, named PolyTop3D, is available in the (electronic) Supplementary Material accompanying this publication. (A toy sketch of the continuous-versus-constant density idea appears after this list.)
  3. The emergence of mobile apps (e.g., location-based services, geo-social networks, ride-sharing) led to the collection of vast amounts of trajectory data that greatly benefit the understanding of individual mobility. One problem of particular interest is next-location prediction, which facilitates location-based advertising, point-of-interest recommendation, traffic optimization, etc. However, using individual trajectories to build prediction models introduces serious privacy concerns, since the exact whereabouts of users can disclose sensitive information such as their health status or lifestyle choices. Several research efforts focused on privacy-preserving next-location prediction, but they have serious limitations: some use outdated privacy models (e.g., k-anonymity), while others employ learning models with limited expressivity (e.g., matrix factorization). More recent approaches (e.g., DP-SGD) integrate the powerful differential privacy model with neural networks, but they provide only generic and difficult-to-tune methods that do not perform well on location data, which is inherently skewed and sparse. We propose a technique that builds upon DP-SGD, but adapts it for the requirements of next-location prediction. We focus on user-level privacy, a strong privacy guarantee that protects users regardless of how much data they contribute. Central to our approach is the use of the skip-gram model and its negative sampling technique. Our work is the first to propose differentially-private learning with skip-grams. In addition, we devise data grouping techniques within the skip-gram framework that pool together trajectories from multiple users in order to accelerate learning and improve model accuracy. Experiments conducted on real datasets demonstrate that our approach significantly boosts prediction accuracy compared to existing DP-SGD techniques. (A toy skip-gram negative-sampling sketch appears after this list.)
  4. We provide a detailed asymptotic study of gradient flow trajectories and their implicit optimization bias when minimizing the exponential loss over "diagonal linear networks". This is the simplest model displaying a transition between "kernel" and non-kernel ("rich" or "active") regimes. We show how the transition is controlled by the relationship between the initialization scale and how accurately we minimize the training loss. Our results indicate that some limit behaviors of gradient descent only kick in at ridiculous training accuracies (well beyond 10^-100). Moreover, the implicit bias at reasonable initialization scales and training accuracies is more complex and not captured by these limits. (A toy gradient-descent sketch appears after this list.)
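For item 1 above (Stampede), the sketch below is a rough, hypothetical illustration rather than the authors' implementation. It shows the kind of layered dynamic-programming pass the abstract describes: given several candidate joint configurations per path point (here random stand-ins for per-point IK solutions), pick one configuration per point so that consecutive configurations stay close in joint space. The sizes and the joint-space-distance edge cost are assumptions.

```python
# Viterbi-style shortest path through layers of candidate configurations.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_candidates, n_joints = 50, 8, 6        # assumed sizes
layers = rng.uniform(-np.pi, np.pi, (n_points, n_candidates, n_joints))

cost = np.zeros((n_points, n_candidates))          # best cost to reach each node
back = np.zeros((n_points, n_candidates), dtype=int)
for i in range(1, n_points):
    # Pairwise joint-space distances between layer i-1 and layer i.
    d = np.linalg.norm(layers[i - 1, :, None, :] - layers[i, None, :, :], axis=-1)
    total = cost[i - 1, :, None] + d               # (n_candidates, n_candidates)
    back[i] = np.argmin(total, axis=0)             # best predecessor per node
    cost[i] = np.min(total, axis=0)

# Backtrack the smoothest sequence of candidate indices.
path = [int(np.argmin(cost[-1]))]
for i in range(n_points - 1, 0, -1):
    path.append(int(back[i, path[-1]]))
path.reverse()
trajectory = layers[np.arange(n_points), path]     # (n_points, n_joints)
```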
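For item 2 above, the snippet is only a loose illustration of the contrast the abstract draws between an element-wise constant density field and a continuous one built from basis functions. It uses bilinear shape functions on a tiny structured grid as stand-ins for the VEM basis; it is not VEM or PolyTop3D code, and all names and sizes are assumptions.

```python
# Contrast: one density value per element vs. a density field that can be
# evaluated anywhere inside an element from nodal design variables.
import numpy as np

nx = ny = 4                                                  # assumed tiny grid
node_rho = np.random.default_rng(1).uniform(0, 1, (nx + 1, ny + 1))  # nodal densities

# Element-wise constant stand-in: one value per element (here the nodal average).
elem_rho_const = 0.25 * (node_rho[:-1, :-1] + node_rho[1:, :-1]
                         + node_rho[:-1, 1:] + node_rho[1:, 1:])

def density(e_i, e_j, xi, eta):
    """Bilinear interpolation of nodal densities at local coords (xi, eta) in [0, 1]^2."""
    n = node_rho
    return ((1 - xi) * (1 - eta) * n[e_i, e_j] + xi * (1 - eta) * n[e_i + 1, e_j]
            + (1 - xi) * eta * n[e_i, e_j + 1] + xi * eta * n[e_i + 1, e_j + 1])

print(elem_rho_const[0, 0], density(0, 0, 0.5, 0.5))         # same element, two views
```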
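For item 3 above, here is a hypothetical PyTorch sketch of a skip-gram objective with negative sampling over location IDs, the model family the abstract builds on. The vocabulary size, embedding dimension, and batch construction are assumptions, and the DP-SGD part (per-example gradient clipping and noise addition) is omitted.

```python
# Skip-gram with negative sampling over location IDs (toy batch).
import torch
import torch.nn.functional as F

n_locations, dim = 10_000, 64                     # assumed vocabulary of places
emb_in = torch.nn.Embedding(n_locations, dim)     # "center" location embeddings
emb_out = torch.nn.Embedding(n_locations, dim)    # "context" location embeddings

def skipgram_ns_loss(center, context, negatives):
    """center, context: (B,) location ids; negatives: (B, K) sampled ids."""
    c = emb_in(center)                            # (B, D)
    pos = (c * emb_out(context)).sum(-1)          # (B,)
    neg = torch.bmm(emb_out(negatives), c.unsqueeze(-1)).squeeze(-1)  # (B, K)
    return -(F.logsigmoid(pos) + F.logsigmoid(-neg).sum(-1)).mean()

B, K = 32, 5                                      # toy batch: current vs. next location
center = torch.randint(0, n_locations, (B,))
context = torch.randint(0, n_locations, (B,))
negatives = torch.randint(0, n_locations, (B, K))
loss = skipgram_ns_loss(center, context, negatives)
loss.backward()                                   # DP-SGD would clip + add noise here
```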
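For item 4 above, the toy sketch below trains a diagonal linear network (prediction is the inner product of beta and x, with beta = u*u - v*v) on the exponential loss using plain gradient descent; varying the initialization scale alpha is what moves the dynamics between the kernel and rich regimes discussed in the abstract. The data, step size, and iteration count are arbitrary choices for illustration, not the paper's experiments.

```python
# Diagonal linear network with exponential loss and plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 100
X = rng.standard_normal((n, d))
y = np.sign(X @ (rng.standard_normal(d) * (rng.random(d) < 0.05)))  # sparse teacher
y[y == 0] = 1.0

alpha = 1e-2                                      # initialization scale (try 1e-6 vs 1.0)
u = np.full(d, alpha)
v = np.full(d, alpha)
lr = 1e-2
for _ in range(20_000):
    beta = u * u - v * v
    margins = y * (X @ beta)
    g = -(y * np.exp(-margins)) @ X / n           # d(exp loss)/d(beta)
    u -= lr * g * 2 * u                           # chain rule through beta = u^2 - v^2
    v -= lr * (-g) * 2 * v

print("training loss:", np.mean(np.exp(-y * (X @ (u * u - v * v)))))
```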