
Deep neural networks (DNNs) have shown their success as high-dimensional function approximators in many applications; however, training DNNs can be challenging in general. DNN training is commonly phrased as a stochastic optimization problem whose challenges include nonconvexity, nonsmoothness, insufficient regularization, and complicated data distributions. Hence, the performance of DNNs on a given task depends crucially on tuning hyperparameters, especially learning rates and regularization parameters. In the absence of theoretical guidelines or prior experience on similar tasks, this requires solving many training problems, which can be time-consuming and demanding on computational resources. This can limit the applicability of DNNs to problems…
Free, publicly accessible full text available July 1, 2023

We propose a neural network approach that yields approximate solutions for high-dimensional optimal control problems and demonstrate its effectiveness using examples from multi-agent path finding. Our approach yields controls in a feedback form, where the policy function is given by a neural network (NN). Specifically, we fuse the Hamilton-Jacobi-Bellman (HJB) and Pontryagin Maximum Principle (PMP) approaches by parameterizing the value function with an NN. Our approach enables us to obtain approximately optimal controls in real time without having to solve an optimization problem. Once the policy function is trained, generating a control at a given space-time location takes milliseconds; in contrast,…
Free, publicly accessible full text available July 1, 2023

Two segmentation methods, one atlas-based and one neural-network-based, were compared to see how well they can each automatically segment the brain stem and cerebellum in Displacement Encoding with Stimulated Echoes Magnetic Resonance Imaging (DENSE-MRI) data. The segmentation is a prerequisite for estimating the average displacements in these regions, which have recently been proposed as biomarkers in the diagnosis of Chiari Malformation type I (CMI). In numerical experiments, the segmentations of both methods were similar to manual segmentations provided by trained experts. It was found that, overall, the neural-network-based method alone produced more accurate segmentations than the atlas-based method did alone,…
Free, publicly accessible full text available July 1, 2023

Free, publicly accessible full text available April 1, 2023

To analyze the abundance of multidimensional data, tensor-based frameworks have been developed. Traditionally, the matrix singular value decomposition (SVD) is used to extract the most dominant features from a matrix containing the vectorized data. While the SVD is highly useful for data that can be appropriately represented as a matrix, this step of vectorization causes us to lose the high-dimensional relationships intrinsic to the data. To facilitate efficient multidimensional feature extraction, we utilize a projection-based classification algorithm using the t-SVDM, a tensor analog of the matrix SVD. Our work extends the t-SVDM framework and the classification algorithm, both initially proposed…
Free, publicly accessible full text available October 31, 2022
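To make the "traditional" baseline in this abstract concrete, here is a minimal sketch of matrix-SVD projection-based classification on vectorized data: a rank-k left-singular basis is computed per class, and a test sample is assigned to the class whose basis reconstructs it with the smallest residual. This illustrates only the matrix analogue the abstract contrasts against, not the authors' t-SVDM method; the toy data and function names are invented for illustration.

```python
import numpy as np

def class_bases(train, labels, k):
    """Compute a rank-k left-singular basis U_c for each class c.
    Columns of `train` are vectorized samples."""
    bases = {}
    for c in np.unique(labels):
        A = train[:, labels == c]
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        bases[c] = U[:, :k]          # keep the k most dominant features
    return bases

def classify(x, bases):
    """Assign x to the class whose basis reconstructs it best,
    i.e. smallest residual after orthogonal projection onto span(U_c)."""
    resid = {c: np.linalg.norm(x - U @ (U.T @ x)) for c, U in bases.items()}
    return min(resid, key=resid.get)

# Toy data: two classes concentrated near different coordinate axes.
rng = np.random.default_rng(0)
A0 = np.outer([1, 0, 0, 0], rng.standard_normal(20))
A1 = np.outer([0, 0, 1, 0], rng.standard_normal(20))
train = np.hstack([A0, A1]) + 0.01 * rng.standard_normal((4, 40))
labels = np.array([0] * 20 + [1] * 20)

bases = class_bases(train, labels, k=1)
print(classify(np.array([1.0, 0, 0.05, 0]), bases))  # → 0
```

The t-SVDM variant described in the abstract replaces the vectorize-then-SVD step with a tensor decomposition so that the multidimensional structure of each sample is preserved.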

Deep generative models (DGM) are neural networks with many hidden layers trained to approximate complicated, high-dimensional probability distributions using a large number of samples. When trained successfully, we can use the DGMs to estimate the likelihood of each observation and to create new samples from the underlying distribution. Developing DGMs has become one of the most hotly researched fields in artificial intelligence in recent years. The literature on DGMs has become vast and is growing rapidly. Some advances have even reached the public sphere, for example, the recent successes in generating realistic-looking images, voices, or movies; so-called deep fakes. Despite…

We propose a neural network approach for solving high-dimensional optimal control problems. In particular, we focus on multi-agent control problems with obstacle and collision avoidance. These problems immediately become high-dimensional, even for moderate phase-space dimensions per agent. Our approach fuses the Pontryagin Maximum Principle and Hamilton-Jacobi-Bellman (HJB) approaches and parameterizes the value function with a neural network. Our approach yields controls in a feedback form for quick calculation and robustness to moderate disturbances to the system. We train our model using the objective function and optimality conditions of the control problem. Therefore, our training algorithm neither involves a data generation…
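A toy sketch of the feedback idea in this abstract: once a value function V is available, the PMP minimization of the Hamiltonian gives the control in closed form for simple dynamics, so evaluating the policy costs one gradient call rather than an optimization solve. Here V is a hand-set quadratic standing in for a trained network, the dynamics are x' = u with running cost ½|u|², and the gradient is taken by finite differences; all of these simplifications are assumptions for illustration, not the authors' trained model.

```python
import numpy as np

def value(x):
    """Stand-in for a trained value-function network V(x);
    here a simple quadratic, purely for illustration."""
    return 0.5 * np.dot(x, x)

def grad_value(x, h=1e-6):
    """Finite-difference gradient of V (a real NN would use autodiff)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (value(x + e) - value(x - e)) / (2 * h)
    return g

def feedback_control(x):
    """For dynamics x' = u and running cost 0.5*|u|^2, minimizing the
    Hamiltonian over u yields u*(x) = -grad V(x): a feedback policy
    evaluated without solving an optimization problem."""
    return -grad_value(x)

x = np.array([1.0, -2.0])
u = feedback_control(x)      # steers the state toward the origin
```

Because the control is a function of the current state, the same policy reacts to disturbances: perturb x and the next control is simply recomputed from the perturbed state.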

A normalizing flow is an invertible mapping between an arbitrary probability distribution and a standard normal distribution; it can be used for density estimation and statistical inference. Computing the flow follows the change of variables formula and thus requires invertibility of the mapping and an efficient way to compute the determinant of its Jacobian. To satisfy these requirements, normalizing flows typically consist of carefully chosen components. Continuous normalizing flows (CNFs) are mappings obtained by solving a neural ordinary differential equation (ODE). The neural ODE's dynamics can be chosen almost arbitrarily while ensuring invertibility. Moreover, the log-determinant of the flow's Jacobian…
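The change of variables formula mentioned above can be sketched with the simplest possible flow: an invertible affine map x ↦ z = (x − μ)/σ, whose Jacobian is diagonal so its log-determinant is trivially cheap, which is exactly the requirement the abstract describes. The specific μ and σ values are arbitrary illustration choices, not from the source.

```python
import numpy as np

# A one-layer "flow": an invertible affine map x -> z = (x - mu) / sigma.
mu = np.array([1.0, -0.5])
sigma = np.array([2.0, 0.5])

def log_density(x):
    """Density of x under the flow via the change of variables formula:
    log p(x) = log N(z; 0, I) + log |det dz/dx|."""
    z = (x - mu) / sigma
    log_normal = -0.5 * np.sum(z**2) - 0.5 * len(z) * np.log(2 * np.pi)
    log_det_jac = -np.sum(np.log(sigma))   # dz_i/dx_i = 1/sigma_i, diagonal Jacobian
    return log_normal + log_det_jac

x = np.array([1.0, -0.5])   # at mu, z = 0, so the density is maximal
print(log_density(x))
```

Richer flows (coupling layers, CNFs) chain many such invertible pieces; the log-determinants simply add, which is why architectures are chosen so that each piece's determinant stays cheap.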