Title: Spatial-Spectral Holographic Mode Demultiplexing, Dispersion Compensation, and Routing
Mode-group demultiplexing, modal dispersion compensation, and header-based modal-channel self-routing in multimode fiber networks are enabled using an all-optical signal processing technique based on spatial-spectral holography, in order to achieve transparent all-optical networks.
Award ID(s):
1817174
PAR ID:
10658857
Author(s) / Creator(s):
 ;  
Publisher / Repository:
IEEE
Date Published:
Page Range / eLocation ID:
1 to 4
Subject(s) / Keyword(s):
spatial-spectral holography, multi-mode fibers, modal dispersion, optical signal processing
Format(s):
Medium: X
Location:
Mantova, Italy
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Cross-modal retrieval aims to learn discriminative and modal-invariant features for data from different modalities. Unlike existing methods, which usually learn from features extracted by offline networks, in this paper we propose an approach to jointly train the components of the cross-modal retrieval framework with metadata, enabling the network to find optimal features. The proposed end-to-end framework is updated with three loss functions: 1) a novel cross-modal center loss to eliminate cross-modal discrepancy, 2) cross-entropy loss to maximize inter-class variations, and 3) mean-square-error loss to reduce modality variations. In particular, the proposed cross-modal center loss minimizes the distances of features from objects belonging to the same class across all modalities. Extensive experiments have been conducted on retrieval tasks across multiple modalities, including 2D image, 3D point cloud, and mesh data. The proposed framework significantly outperforms state-of-the-art methods for both cross-modal and in-domain retrieval of 3D objects on the ModelNet10 and ModelNet40 datasets.
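The cross-modal center loss described above lends itself to a compact sketch: one learnable center per class, shared by all modalities, with every feature pulled toward its own class center. The function below is an illustrative reconstruction, not the authors' code; the feature dictionary, shapes, and toy data are assumptions:

```python
import numpy as np

def cross_modal_center_loss(features, labels, centers):
    """Mean squared distance from each feature to its class center.

    features: dict mapping modality name -> (N, D) feature array
    labels:   (N,) integer class labels, shared across all modalities
    centers:  (C, D) array with one learnable center per class,
              shared by every modality (hypothetical layout)
    """
    total, count = 0.0, 0
    for feats in features.values():
        diffs = feats - centers[labels]        # distance to own class center
        total += np.sum(diffs ** 2)
        count += feats.shape[0]
    return total / count

# Toy check: two modalities, two classes, 2-D features near their centers.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = np.array([0, 0, 1, 1])
features = {
    "image": centers[labels] + 0.1 * rng.standard_normal((4, 2)),
    "cloud": centers[labels] + 0.1 * rng.standard_normal((4, 2)),
}
loss = cross_modal_center_loss(features, labels, centers)
```

Because the centers are shared across modalities, driving this loss down pulls same-class features from different modalities toward a common point, which is what removes the cross-modal discrepancy.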
  2. Abstract Diffractive optical neural networks have shown promising advantages over electronic circuits for accelerating modern machine learning (ML) algorithms. However, it is challenging to achieve a fully programmable all-optical implementation and rapid hardware deployment. Here, a large-scale, cost-effective, complex-valued, and reconfigurable diffractive all-optical neural network system in the visible range is demonstrated, based on cascaded transmissive twisted nematic liquid crystal spatial light modulators. The categorical reparameterization technique creates a physics-aware training framework for fast and accurate deployment of computer-trained models onto optical hardware. Such a full stack of hardware and software enables not only the experimental demonstration of classifying handwritten digits in standard datasets, but also theoretical analysis and experimental verification of physics-aware adversarial attacks on the system, generated from a complex-valued gradient-based algorithm. A detailed adversarial-robustness comparison with conventional multilayer perceptrons and convolutional neural networks reveals a distinct statistical adversarial property of diffractive optical neural networks. The developed full stack of software and hardware provides new opportunities for employing diffractive optics in a variety of ML tasks and in research on optical adversarial ML.
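The forward pass of such a cascade of phase-only modulator layers can be sketched with the standard angular-spectrum propagation method. This is a generic NumPy illustration, not the paper's model; the grid size, pixel pitch, wavelength, and layer spacing are made-up parameters:

```python
import numpy as np

def propagate(field, dx, wavelength, z):
    """Free-space propagation by the angular-spectrum method (FFT-based)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    # Evanescent components (kz2 < 0) are simply clamped in this sketch.
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(kz2, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_forward(field, phase_masks, dx, wavelength, z):
    """Cascade phase-only modulator layers separated by free space."""
    for phase in phase_masks:
        field = field * np.exp(1j * phase)     # SLM phase modulation
        field = propagate(field, dx, wavelength, z)
    return np.abs(field) ** 2                  # intensity at the detector

# Toy check: phase-only layers and unitary propagation conserve energy.
rng = np.random.default_rng(1)
field = np.ones((32, 32), dtype=complex)
masks = [rng.uniform(0, 2 * np.pi, (32, 32)) for _ in range(2)]
intensity = diffractive_forward(field, masks, dx=10e-6,
                                wavelength=633e-9, z=0.05)
```

Training a diffractive network amounts to optimizing the `phase_masks`; because the whole forward pass is differentiable in this form, complex-valued gradients through it can also drive adversarial attacks of the kind the abstract describes.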
  3. We investigate the role of representations and architectures for classifying 3D shapes in terms of their computational efficiency, generalization, and robustness to adversarial transformations. By varying the number of training examples and employing cross-modal transfer learning, we study the role of initialization of existing deep architectures for 3D shape classification. Our analysis shows that multiview methods continue to offer the best generalization, even without pretraining on large labeled image datasets and even when trained on simplified inputs such as binary silhouettes. Furthermore, the performance of voxel-based 3D convolutional networks and point-based architectures can be improved via cross-modal transfer from image representations. Finally, we analyze the robustness of 3D shape classifiers to adversarial transformations and present a novel approach for generating adversarial perturbations of a 3D shape for multiview classifiers using a differentiable renderer. We find that point-based networks are more robust to point-position perturbations, while voxel-based and multiview networks are easily fooled by the addition of imperceptible noise to the input.
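The point-position perturbations discussed above follow the familiar sign-of-gradient (FGSM-style) recipe. The sketch below applies it to a deliberately tiny stand-in classifier with an analytic gradient; the model, weights, and step size are illustrative assumptions and do not reflect the paper's multiview pipeline or differentiable renderer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_points(points, W, b, true_class, eps):
    """One sign-of-gradient step on the (x, y, z) point coordinates.

    Toy stand-in model: logits = W @ mean(points) + b. The gradient of the
    cross-entropy loss w.r.t. each point follows by the chain rule through
    the mean-pooling step.
    """
    n = points.shape[0]
    feat = points.mean(axis=0)                    # (3,) pooled feature
    p = softmax(W @ feat + b)
    onehot = np.eye(len(b))[true_class]
    grad_feat = W.T @ (p - onehot)                # d loss / d feat
    grad_points = np.tile(grad_feat / n, (n, 1))  # chain rule through mean
    return points + eps * np.sign(grad_points)    # ascend the loss

# Toy check: with a large step the prediction flips away from the true class.
pts = np.tile([1.0, 0.0, 0.0], (10, 1))
W = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
b = np.zeros(2)
adv = fgsm_points(pts, W, b, true_class=0, eps=2.0)
```

The same recipe carries over to multiview classifiers once a differentiable renderer lets the image-space gradient flow back to the 3D shape parameters.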
  4.
    Structural health monitoring of complex structures is often limited by restricted access to locations of interest within the structure and by the availability of operational loads. In this work, a novel output-only virtual sensing scheme is proposed. This scheme implements modal expansion within an augmented Kalman filter. Performance of the proposed scheme is compared with two existing methods. Method 1 relies on a finite element model updating, batch data processing, and modal expansion (MUME) procedure. Method 2 employs a recursive sequential estimation algorithm, which feeds a substructure model of the instrumented system into an augmented Kalman filter (AKF). The new scheme, referred to as Method 3 (ME-AKF), feeds strain estimates generated via modal expansion into an AKF as virtual measurements. To demonstrate the applicability of these methods, a rollercoaster connection was instrumented with accelerometers, strain rosettes, and an optical sensor. A comparison of estimated dynamic strain response at unmeasured locations using the three alternative schemes is presented. Although acceleration measurements are used indirectly for model updating, the response-only methods presented in this research use only measurements from strain rosettes for strain history predictions and require no prior knowledge of input forces. Predicted strains using all methods are shown to sufficiently predict the measured strain time histories from a control location and lie within a 95% confidence interval calculated based on modal expansion equations. In addition, the proposed ME-AKF method improves strain predictions at unmeasured locations without the need for batch data processing. The proposed scheme shows high potential for real-time estimation of the dynamic strain and stress state of complex structures at unmeasured locations.
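The core idea of recovering strain at unmeasured locations via modal expansion can be illustrated with a minimal linear Kalman filter: estimate the modal coordinates from the instrumented strains, then expand through the mode shapes at the uninstrumented locations. This sketch assumes a random-walk model for the modal coordinates and made-up mode-shape matrices; it is a simplified stand-in for the augmented Kalman filter described above:

```python
import numpy as np

def me_kf_virtual_strain(y_meas, Phi_m, Phi_u, q0, P0, Q, R):
    """Virtual strain at unmeasured locations from measured strains.

    Illustrative random-walk model for the modal coordinates q:
        q[k+1] = q[k] + w,      y[k] = Phi_m q[k] + v
    Unmeasured strain is recovered by modal expansion: eps_u = Phi_u q.
    """
    q, P = q0.copy(), P0.copy()
    out = []
    for y in y_meas:
        P = P + Q                                  # predict (identity dynamics)
        S = Phi_m @ P @ Phi_m.T + R                # innovation covariance
        K = P @ Phi_m.T @ np.linalg.inv(S)         # Kalman gain
        q = q + K @ (y - Phi_m @ q)                # update modal coordinates
        P = (np.eye(len(q)) - K @ Phi_m) @ P
        out.append(Phi_u @ q)                      # expand to virtual sensor
    return np.array(out)

# Toy check: one mode, constant measured strain of 3.0; the virtual strain
# should converge to (Phi_u / Phi_m) * 3.0 = 6.0.
Phi_m = np.array([[1.0]])
Phi_u = np.array([[2.0]])
meas = [np.array([3.0])] * 50
est = me_kf_virtual_strain(meas, Phi_m, Phi_u,
                           q0=np.zeros(1), P0=np.eye(1),
                           Q=1e-4 * np.eye(1), R=1e-2 * np.eye(1))
```

The recursive update is what makes the approach suitable for real-time use: unlike a batch modal-expansion fit, each new strain sample refines the modal-coordinate estimate immediately.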
  5. Representation learning is a challenging but essential task in audiovisual learning. A key challenge is to generate strong cross-modal representations while still capturing the discriminative information contained in unimodal features. Properly capturing this information is important for increasing accuracy and robustness in audio-visual tasks. Focusing on emotion recognition, this study proposes novel cross-modal ladder networks to capture modality-specific information while building strong cross-modal representations. Our method uses representations from a backbone network to implement unsupervised auxiliary tasks that reconstruct intermediate-layer representations across the acoustic and visual networks. The skip connections between the cross-modal encoder and decoder provide powerful modality-specific and multimodal representations for emotion recognition. Our model achieves high performance on the CREMA-D corpus, with precision, recall, and F1 scores over 80% on a six-class problem.
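A minimal sketch of the cross-modal encoder/decoder idea: reconstruction across modalities as an unsupervised auxiliary task, and a skip-style concatenation of the modality-specific codes feeding the classifier. All layer sizes, single-layer encoders, and random weights are hypothetical simplifications of the ladder-network architecture described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """Single dense layer with tanh activation (toy building block)."""
    return np.tanh(x @ W)

# Hypothetical shapes: 40-D acoustic features, 64-D visual features,
# a 16-D code per modality, and 6 emotion classes.
Wa_enc, Wv_enc = rng.standard_normal((40, 16)), rng.standard_normal((64, 16))
Wa_dec, Wv_dec = rng.standard_normal((16, 40)), rng.standard_normal((16, 64))
W_cls = rng.standard_normal((32, 6))

def forward(audio, video):
    """Cross-modal encoders with auxiliary reconstruction and skip fusion."""
    za, zv = layer(audio, Wa_enc), layer(video, Wv_enc)
    # Auxiliary task: each code reconstructs the *other* modality, pushing
    # cross-modal information into the shared representation space.
    recon_loss = (np.mean((layer(za, Wv_dec) - video) ** 2) +
                  np.mean((layer(zv, Wa_dec) - audio) ** 2))
    # Skip-style fusion: the classifier sees both modality-specific codes.
    logits = np.concatenate([za, zv], axis=-1) @ W_cls
    return logits, recon_loss

audio = rng.standard_normal((5, 40))
video = rng.standard_normal((5, 64))
logits, recon_loss = forward(audio, video)
```

In training, the reconstruction term would be minimized jointly with the classification loss, so the fused representation stays both discriminative and cross-modally grounded.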