Title: Cartesian Operator Factorization Method for Hydrogen
We generalize Schrödinger’s factorization method for hydrogen from the conventional separation into angular and radial coordinates to a Cartesian-based factorization. Unique to this approach is that the Hamiltonian is represented as a sum over factorizations in terms of coupled operators that depend on the coordinates and momenta in each Cartesian direction. We determine the eigenstates and energies and the wavefunctions in both coordinate and momentum space, and we illustrate how this technique can be employed to develop the conventional confluent hypergeometric equation approach. The methodology developed here could potentially be employed for other Hamiltonians that can be represented as a sum over coupled Schrödinger factorizations.
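For orientation, a minimal sketch of the conventional radial Schrödinger factorization that the Cartesian approach generalizes (written in Hartree atomic units, acting on the reduced radial function u(r) = r R(r); the notation here is illustrative and not taken from the paper):

```latex
% Conventional radial factorization of hydrogen (Hartree atomic units),
% acting on u(r) = r R(r); notation is illustrative, not the paper's.
\[
  H_\ell = \frac{1}{2}\left(-\frac{d^{2}}{dr^{2}} + \frac{\ell(\ell+1)}{r^{2}}\right) - \frac{1}{r},
  \qquad
  A_\ell = \frac{1}{\sqrt{2}}\left(\frac{d}{dr} - \frac{\ell+1}{r} + \frac{1}{\ell+1}\right),
\]
\[
  H_\ell = A_\ell^{\dagger} A_\ell - \frac{1}{2(\ell+1)^{2}}.
\]
```

The state annihilated by $A_\ell$ is the lowest eigenstate of $H_\ell$, with energy $-1/[2(\ell+1)^2]$. The paper replaces this single radial ladder with a sum of factorizations built from coupled operators in the Cartesian coordinates and momenta, as stated in the abstract above.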
Authors:
Award ID(s): 1915130
Publication Date:
NSF-PAR ID: 10343033
Journal Name: Atoms
Volume: 10
Issue: 1
Page Range or eLocation-ID: 14
ISSN: 2218-2004
Sponsoring Org: National Science Foundation
More Like this
  1. Within the field of seismic modelling in anisotropic media, dynamic ray tracing is a powerful technique for computing amplitude and phase properties of the high-frequency Green’s function. Dynamic ray tracing is based on solving a system of Hamilton–Jacobi perturbation equations, which may be expressed in different 3-D coordinate systems. We consider two particular coordinate systems: a Cartesian coordinate system with a fixed origin and a curvilinear ray-centred coordinate system associated with a reference ray. For each system we form the corresponding 6-D phase space, which encapsulates six degrees of freedom in the variation of position and momentum. The formulation of (conventional) dynamic ray tracing in ray-centred coordinates is based on specific knowledge of the first-order transformation between Cartesian and ray-centred phase-space perturbations. Such a transformation can also be used to define initial conditions for dynamic ray tracing in Cartesian coordinates and to obtain the coefficients involved in two-point traveltime extrapolation. As a step towards extending dynamic ray tracing in ray-centred coordinates to higher orders, we establish detailed information about the higher-order properties of the transformation between the Cartesian and ray-centred phase-space perturbations. By numerical examples, we (1) visualize the validity limits of the ray-centred coordinate system, (2) demonstrate the transformation of higher-order derivatives of traveltime from Cartesian to ray-centred coordinates and (3) address the stability of function values and derivatives of volumetric parameters in a higher-order representation of the subsurface model. (An illustrative sketch for this item appears after the list.)
  2. We present an efficient voxelization method to encode the geometry and attributes of 3D point clouds obtained from autonomous vehicles. Due to the circular scanning trajectory of the sensors, the geometry of LiDAR point clouds is inherently different from that of point clouds captured with RGBD cameras. Our method exploits these specific properties by representing points in cylindrical coordinates instead of conventional Cartesian coordinates. We demonstrate that the Region Adaptive Hierarchical Transform (RAHT) can be extended to this setting, leading to attribute encoding based on a volumetric partition in cylindrical coordinates. Experimental results show that our proposed voxelization outperforms conventional approaches based on Cartesian coordinates for this type of data. We observe a significant improvement in attribute coding performance, with a 5-10% reduction in bitrate, and in the octree representation, with a 35-45% reduction in bits. (An illustrative sketch for this item appears after the list.)
  3. How to effectively represent camera pose is an essential problem in 3D computer vision, especially in tasks such as camera pose regression and novel view synthesis. Traditionally, the 3D position of the camera is represented by Cartesian coordinates and the orientation by Euler angles or quaternions. These representations are manually designed and may not be the most effective for downstream tasks. In this work, we propose an approach to learn neural representations of camera poses and 3D scenes, coupled with neural representations of local camera movements. Specifically, the camera pose and 3D scene are represented as vectors, and the local camera movement is represented as a matrix operating on the vector of the camera pose. We demonstrate that the camera movement can further be parametrized by a matrix Lie algebra that underlies a rotation system in the neural space. The vector representations are then concatenated and used to generate the posed 2D image through a decoder network. The model is learned from only posed 2D images and corresponding camera poses, without access to depths or shapes. We conduct extensive experiments on synthetic and real datasets. The results show that, compared with other camera pose representations, our learned representation is more robust to noise in novel view synthesis and more effective in camera pose regression. (An illustrative sketch for this item appears after the list.)
  4. Geometric morphometrics (GM) is a powerful analytical approach for evaluating phenotypic variation relevant to taxonomy and systematics, and as with any statistical methodology, it requires adherence to fundamental assumptions for inferences to be strictly valid. An important consideration for GM is how landmark configurations, which represent sets of anatomical loci for evaluating shape variation through Cartesian coordinates, relate to underlying homology (Zelditch et al. 1995; Polly 2008). Perhaps more so than with traditional morphometrics, anatomical homology is a crucial assumption for GM because of the mathematical and biological interpretations associated with shape change depicted by deformation grids, such as the thin plate spline (Klingenberg 2008; Zelditch et al. 2012). GM approaches are often used to analyze shapes or outlines of structures that are not necessarily related through common ancestry, and in this respect GM approaches that use linear semi-landmarks and related methods are particularly amenable to evaluating primary homology, or raw similarity between structures (De Pinna 1991; Palci & Lee 2019). This relaxed interpretation of homology, which focuses more on recognizable and repeatable landmarks, is defensible so long as authors are clear regarding the purpose of the analyses and in defining their landmark configurations (Palci & Lee 2019). Secondary homology, or similarity due to common ancestry, can also be represented with GM methods and is often assumed to be reflected in fixed Type 1 (juxtaposition of tissues) or Type 2 (self-evident geometry) landmarks (Bookstein 1991).
  5. What learning algorithms can be run directly on compressively sensed data? In this work, we consider the question of accurately and efficiently computing low-rank matrix or tensor factorizations given data compressed via random projections. We examine the approach of first performing the factorization in the compressed domain and then reconstructing the original high-dimensional factors from the recovered (compressed) factors. In both the matrix and tensor settings, we establish conditions under which this natural approach provably recovers the original factors. While it is well known that random projections preserve a number of geometric properties of a dataset, our work can be viewed as showing that they can also preserve certain solutions of non-convex, NP-hard problems such as non-negative matrix factorization. We support these theoretical results with experiments on synthetic data and demonstrate the practical applicability of compressed factorization on real-world gene expression and EEG time-series datasets. (An illustrative sketch for this item appears after the list.)
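For item 1 above, a minimal illustration of the Hamiltonian ray equations in Cartesian coordinates and of the linearized system that dynamic ray tracing propagates along the reference ray. The Hamiltonian form shown in the comment is an isotropic example chosen here for brevity; it is not the authors' anisotropic formulation.

```latex
% Kinematic ray equations for a Hamiltonian H(x, p) in Cartesian coordinates
% (an isotropic example is H = (v^2(x) p.p - 1)/2; purely illustrative here).
\[
  \frac{d\mathbf{x}}{d\tau} = \frac{\partial H}{\partial \mathbf{p}}, \qquad
  \frac{d\mathbf{p}}{d\tau} = -\frac{\partial H}{\partial \mathbf{x}}.
\]
% Dynamic ray tracing propagates phase-space perturbations along the reference
% ray through the linearization of these equations:
\[
  \frac{d}{d\tau}
  \begin{pmatrix} \delta\mathbf{x} \\ \delta\mathbf{p} \end{pmatrix}
  =
  \begin{pmatrix}
     \dfrac{\partial^{2} H}{\partial \mathbf{p}\,\partial \mathbf{x}} &
     \dfrac{\partial^{2} H}{\partial \mathbf{p}\,\partial \mathbf{p}} \\[6pt]
    -\dfrac{\partial^{2} H}{\partial \mathbf{x}\,\partial \mathbf{x}} &
    -\dfrac{\partial^{2} H}{\partial \mathbf{x}\,\partial \mathbf{p}}
  \end{pmatrix}
  \begin{pmatrix} \delta\mathbf{x} \\ \delta\mathbf{p} \end{pmatrix}.
\]
```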
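For item 2, a minimal sketch of quantizing LiDAR points on a cylindrical grid; the function name and bin sizes are made up for illustration, and the sketch covers only the coordinate conversion and occupancy step, not the RAHT attribute transform or the authors' codec.

```python
import numpy as np

def voxelize_cylindrical(points, d_rho=0.1, d_phi=np.deg2rad(0.5), d_z=0.1):
    """Quantize (x, y, z) LiDAR points on a cylindrical grid.

    points: (N, 3) array of Cartesian coordinates.
    Returns per-point voxel indices and the set of occupied voxels.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.hypot(x, y)                        # radial distance from the sensor axis
    phi = np.mod(np.arctan2(y, x), 2 * np.pi)   # azimuth in [0, 2*pi)
    idx = np.stack([
        np.floor(rho / d_rho),
        np.floor(phi / d_phi),
        np.floor(z / d_z),
    ], axis=1).astype(np.int64)
    # Geometry is the set of occupied voxels; attributes would be averaged per
    # voxel before a transform such as RAHT.
    occupied = np.unique(idx, axis=0)
    return idx, occupied

# Example: random points standing in for a LiDAR sweep.
pts = np.random.randn(1000, 3) * [10.0, 10.0, 1.0]
idx, occ = voxelize_cylindrical(pts)
print(occ.shape)
```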
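For item 3, a small illustration of the classical matrix Lie-algebra parametrization of rotations (the exponential map from so(3) via Rodrigues' formula) that underlies the kind of matrix-parametrized camera movement described; it is not the paper's learned neural representation.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric so(3) matrix."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A small camera rotation about an arbitrary axis applied to a Cartesian position.
R = exp_so3(np.array([0.01, 0.02, -0.005]))
t = np.array([0.0, 0.0, 1.5])
print(R @ t)
```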
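For item 5, a schematic sketch of the compress-then-factor idea, using a random Gaussian projection and a truncated SVD as a stand-in factorization; the paper's guarantees concern factorizations such as non-negative matrix and tensor factorization, which this toy example does not implement. The check simply verifies that the factor living in the uncompressed dimension can be recovered from the compressed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank data: X = W @ H with d >> m.
d, n, r, m = 2000, 300, 5, 60
W = rng.standard_normal((d, r))
H = rng.standard_normal((r, n))
X = W @ H

# Compress the high-dimensional axis with a random Gaussian projection.
P = rng.standard_normal((m, d)) / np.sqrt(m)
Y = P @ X                                  # compressed data: Y = (P @ W) @ H

# Factor in the compressed domain (truncated SVD as the stand-in factorization).
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
H_hat = np.diag(s[:r]) @ Vt[:r]            # recovered right factor

# The row space of H is preserved: project H onto span(H_hat) and check the residual.
coeffs = np.linalg.lstsq(H_hat.T, H.T, rcond=None)[0]
proj = H_hat.T @ coeffs
print(np.linalg.norm(H.T - proj) / np.linalg.norm(H))   # ~ 0 (row space preserved)
```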