Search for: All records
Creators/Authors contains: "Zhang, L."

  1. Piezoelectric surface acoustic waves (SAWs) are powerful tools for investigating and controlling elementary and collective excitations in condensed matter. In semiconductor two-dimensional electron systems, SAWs have been used to reveal the spatial and temporal structure of electronic states, produce quantized charge pumping, and transfer quantum information. In contrast to semiconductors, electrons trapped above the surface of superfluid helium form an ultra-high-mobility two-dimensional electron system that hosts strongly interacting Coulomb liquid and solid states, whose non-trivial spatial structure and temporal dynamics make them well suited to SAW-based experiments. Here we report on the coupling of electrons on helium to an evanescent piezoelectric SAW. We demonstrate precision acoustoelectric transport of as little as ~0.01% of the electrons, opening the door to future quantized charge pumping experiments (a back-of-the-envelope estimate of such pumping currents is sketched after this list). We also show that SAWs are a route to investigating the high-frequency dynamical response, and relaxational processes, of collective excitations of the liquid and solid phases of electrons on helium.
    Free, publicly-accessible full text available December 1, 2022
  2. Free, publicly-accessible full text available September 1, 2022
  3. The success of supervised learning requires large-scale ground-truth labels, which are expensive and time-consuming to annotate, or may require special skills. To address this issue, many self-supervised or unsupervised methods have been developed. Unlike most existing self-supervised methods, which learn only 2D image features or only 3D point cloud features, this paper presents a novel and effective self-supervised learning approach that jointly learns both 2D image features and 3D point cloud features by exploiting cross-modality and cross-view correspondences, without using any human-annotated labels. Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network. The two types of features are fed into a two-layer fully connected neural network that estimates the cross-modality correspondence. The three networks are trained jointly (cross-modality) by verifying whether two sampled data of different modalities belong to the same object; meanwhile, the 2D convolutional neural network is additionally optimized by minimizing intra-object distance while maximizing inter-object distance of rendered images in different views (cross-view). A minimal sketch of this training setup appears after this list. The effectiveness of the learned 2D and 3D features is evaluated by transferring them to five different tasks: multi-view 2D shape recognition, 3D shape recognition, multi-view 2D shape retrieval, 3D shape retrieval, and 3D part segmentation. Extensive evaluations on all five tasks across different datasets demonstrate the strong generalization and effectiveness of the 2D and 3D features learned by the proposed self-supervised method.
    Free, publicly-accessible full text available June 18, 2022
  4. Free, publicly-accessible full text available May 3, 2022
  5. Free, publicly-accessible full text available July 1, 2022
  6. Free, publicly-accessible full text available April 1, 2022
  7. Free, publicly-accessible full text available July 1, 2022
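
As a quick illustration of the quantized charge pumping mentioned in result 1, the standard relation for acoustoelectric pumping is I = n e f, where n electrons are transported per SAW cycle at frequency f. The snippet below is only a back-of-the-envelope sketch; the SAW frequency is an assumed example value, not a figure from the paper.

    # Back-of-the-envelope quantized acoustoelectric pumping current, I = n * e * f.
    # The SAW frequency here is an assumed example value, not taken from the paper.
    ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs
    f_saw = 300e6                        # assumed SAW frequency in Hz

    for n in (1, 2, 3):                  # electrons transported per SAW cycle
        current = n * ELEMENTARY_CHARGE * f_saw
        print(f"n = {n}: I = {current:.3e} A")

At these frequencies each additional electron per cycle adds a current on the order of tens of picoamperes, which is why quantized pumping plateaus are resolvable with precision current measurements.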
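The joint 2D/3D self-supervised setup described in result 3 can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, assuming small example networks: the layer sizes, the k-nearest-neighbor graph convolution, and the simple contrastive cross-view loss are assumptions for illustration, not the authors' implementation.

    # Minimal sketch of the cross-modality / cross-view setup (assumed sizes and layers).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Image2DEncoder(nn.Module):
        """2D CNN mapping a rendered view (B, 3, H, W) to a feature vector (B, D)."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.fc = nn.Linear(64, feat_dim)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    class PointCloudEncoder(nn.Module):
        """Graph-convolution-style encoder over a k-NN graph of the points (B, N, 3)."""
        def __init__(self, feat_dim=128, k=16):
            super().__init__()
            self.k = k
            self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, feat_dim))

        def forward(self, pts):
            dist = torch.cdist(pts, pts)                                  # (B, N, N)
            idx = dist.topk(self.k, largest=False).indices                # neighbor indices (B, N, k)
            nbrs = torch.gather(
                pts.unsqueeze(1).expand(-1, pts.size(1), -1, -1), 2,
                idx.unsqueeze(-1).expand(-1, -1, -1, 3))                  # (B, N, k, 3)
            center = pts.unsqueeze(2).expand_as(nbrs)
            edge = torch.cat([center, nbrs - center], dim=-1)             # edge features
            feat = self.mlp(edge).max(dim=2).values                       # aggregate neighbors
            return feat.max(dim=1).values                                 # global pool -> (B, D)

    class CorrespondenceHead(nn.Module):
        """Two-layer fully connected net predicting whether a 2D and a 3D feature
        come from the same object (cross-modality correspondence)."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, f2d, f3d):
            return self.net(torch.cat([f2d, f3d], dim=-1)).squeeze(-1)    # logits

    def cross_view_loss(view_a, view_b, margin=1.0):
        """Pull 2D features of two views of the same object together and push
        features of different objects apart (simple contrastive stand-in)."""
        pos = (view_a - view_b).pow(2).sum(-1)
        neg = (view_a - view_b.roll(1, dims=0)).pow(2).sum(-1)
        return (pos + F.relu(margin - neg)).mean()

    # Tiny smoke test with random data standing in for rendered views and point clouds.
    img_enc, pc_enc, head = Image2DEncoder(), PointCloudEncoder(), CorrespondenceHead()
    views = torch.rand(4, 2, 3, 64, 64)          # 4 objects, 2 rendered views each
    points = torch.rand(4, 1024, 3)              # 4 point clouds, one per object
    f_a, f_b = img_enc(views[:, 0]), img_enc(views[:, 1])
    f_pc = pc_enc(points)
    same_object = torch.ones(4)                  # these image/point-cloud pairs match
    loss = (F.binary_cross_entropy_with_logits(head(f_a, f_pc), same_object)
            + cross_view_loss(f_a, f_b))
    loss.backward()

In training, the correspondence logits would be supervised with binary cross-entropy on same-object versus different-object image/point-cloud pairs, while the cross-view term would be applied to 2D features of two rendered views of the same object, so the whole setup needs no human annotations.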