Title: RetinoSim: an Event-based Data Synthesis Tool for Neuromorphic Vision Architecture Exploration
Neuromorphic vision sensors (NVS), also known as silicon retinas, capture aspects of the biological functionality of the mammalian retina by transducing incident photocurrent into an asynchronous stream of spikes that denote positive and negative changes in intensity. Current state-of-the-art devices are effectively leveraged in a variety of settings, but they still suffer from distinct disadvantages as they transition into high-performance environments such as space and autonomy. This paper outlines and demonstrates a data synthesis tool that gleans characteristics from the retina and allows the user not only to convert traditional video into neuromorphic data, but also to characterize design tradeoffs and inform future endeavors. Our retinomorphic model, RetinoSim, incorporates aspects of current NVS to allow for accurate data conversion while providing biologically inspired features to improve upon this baseline. RetinoSim is implemented in MATLAB with a graphical user interface frontend to allow for expeditious video conversion and architecture exploration. We demonstrate that the tool can be used for real-time conversion of sparse event streams, exploration of frontend configurations, and duplication of existing event datasets.
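The change-detection principle an NVS implements (emit an ON or OFF spike whenever a pixel's log intensity moves past a contrast threshold since the last spike at that pixel) can be sketched as a minimal frame-to-event converter. This is a toy illustration of the general video-to-event conversion idea, not RetinoSim's retinal model; the threshold value and the `(t, x, y, polarity)` event format are assumptions for illustration.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-6):
    """Convert a stack of grayscale frames (T, H, W) into DVS-style events.

    Each event is (t, x, y, polarity): polarity is +1 when the log
    intensity at a pixel has risen by more than `threshold` since the last
    event at that pixel, -1 when it has fallen by more than `threshold`.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference
    events = []
    for t in range(1, len(frames)):
        log_cur = np.log(frames[t].astype(np.float64) + eps)
        diff = log_cur - log_ref
        for polarity, mask in ((1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, x, y, polarity) for x, y in zip(xs, ys))
            log_ref[mask] = log_cur[mask]  # reset reference only where spikes fired
    return events

# A pixel stepping up in brightness and back down yields one ON then one OFF event.
frames = np.full((3, 4, 4), 10.0)
frames[1, 1, 1] = 100.0   # intensity step up  -> ON event in frame 1
frames[2, 1, 1] = 10.0    # step back down     -> OFF event in frame 2
evts = frames_to_events(frames)
```

Because the reference is reset only at pixels that spiked, a static scene produces no further events, which is the source of the sparsity that makes these sensors low-bandwidth.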
Award ID(s):
2020624
PAR ID:
10376898
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the International Conference on Neuromorphic Systems
Page Range / eLocation ID:
1 to 9
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The optic nerve transmits visual information to the brain as trains of discrete events, a low-power, low-bandwidth communication channel also exploited by silicon retina cameras. Extracting high-fidelity visual input from retinal event trains is thus a key challenge for both computational neuroscience and neuromorphic engineering. Here, we investigate whether sparse coding can enable the reconstruction of high-fidelity images and video from retinal event trains. Our approach is analogous to compressive sensing, in which only a random subset of pixels is transmitted and the missing information is estimated via inference. We employed a variant of the Locally Competitive Algorithm to infer sparse representations from retinal event trains, using a dictionary of convolutional features optimized via stochastic gradient descent and trained in an unsupervised manner using a local Hebbian learning rule with momentum. We used an anatomically realistic retinal model with stochastic graded release from cones and bipolar cells to encode thumbnail images as spike trains arising from ON and OFF retinal ganglion cells. The spikes from each model ganglion cell were summed over a 32 ms time window, yielding a noisy rate-coded image. Analogous to how the primary visual cortex is postulated to infer features from noisy spike trains arriving over the optic nerve, we inferred a higher-fidelity sparse reconstruction from the noisy rate-coded image using a convolutional dictionary trained on the original CIFAR10 database. To investigate whether a similar approach works on non-stochastic data, we demonstrate that the same procedure can reconstruct high-frequency video from the asynchronous events arising from a silicon retina camera moving through a laboratory environment.
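The inference step described above, in which sparse coefficients compete through lateral inhibition to explain a noisy rate-coded input, can be sketched with a small dense (non-convolutional) Locally Competitive Algorithm. The dictionary, threshold, and step size below are illustrative assumptions; the paper's dictionary is convolutional and learned on CIFAR10.

```python
import numpy as np

def lca(x, D, lam=0.1, tau=10.0, n_steps=200):
    """Sparse inference with a dense Locally Competitive Algorithm.

    x: input vector (e.g., a flattened rate-coded image); D: dictionary with
    unit-norm rows (atoms x pixels). Returns sparse coefficients `a` such
    that D.T @ a approximately reconstructs x.
    """
    b = D @ x                      # feedforward drive for each atom
    G = D @ D.T - np.eye(len(D))   # lateral inhibition between overlapping atoms
    u = np.zeros(len(D))           # membrane potentials
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (b - u - G @ a) / tau                          # leaky integration
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Two orthogonal atoms; a noisy multiple of the first atom should activate
# essentially only that atom, denoising the input.
D = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
x = np.array([2.0, 0.02, 0.01, 0.0])   # ~2 * atom 0 plus small noise
a = lca(x, D)
recon = D.T @ a
```

The soft threshold suppresses the weakly driven second atom entirely, which is how the sparse prior cleans up the noise in the rate-coded image.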
  2. In educational research, user-simulation interaction is gaining importance as it provides key insights into the effectiveness of simulation-based learning and immersive technologies. A common approach to study user-simulation interaction involves manually analyzing participant interaction in real-time or via video recordings, which is a tedious process. Surveys/questionnaires are also commonly used but are open to subjectivity and only provide qualitative data. The tool proposed in this paper, which we call Environmental Detection for User-Simulation Interaction Measurement (EDUSIM), is a publicly available video analytics tool that receives screen-recorded video input from participants interacting with a simulated environment and outputs statistical data related to time spent in pre-defined areas of interest within the simulation model. The proposed tool utilizes machine learning, namely multi-classification Convolutional Neural Networks, to provide an efficient, automated process for extracting such navigation data. EDUSIM also implements a binary classification model to flag imperfect input video data such as video frames that are outside the specified simulation environment. To assess the efficacy of the tool, we implement a set of immersive simulation-based learning (ISBL) modules in an undergraduate database course, where learners record their screens as they interact with a simulation to complete their ISBL assignments. We then use the EDUSIM tool to analyze the videos collected and compare the tool’s outputs with the expected results obtained by manually analyzing the videos. 
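The downstream aggregation an EDUSIM-style pipeline performs (per-frame area predictions from the multi-class CNN, minus frames the binary model flags as outside the simulation, converted into time spent per area) can be sketched as follows. The function name, labels, and frame rate are hypothetical, not EDUSIM's actual interface.

```python
from collections import Counter

def time_per_area(frame_labels, fps=30.0, invalid_label="out_of_sim"):
    """Aggregate per-frame classifier outputs into seconds spent per area.

    frame_labels: the per-frame prediction sequence; frames labeled
    `invalid_label` (the binary model's flag for footage outside the
    simulation environment) are excluded from the totals.
    """
    counts = Counter(label for label in frame_labels if label != invalid_label)
    return {area: n / fps for area, n in counts.items()}

# At 30 fps: 60 frames in one area, 30 in another, 3 flagged frames dropped.
labels = ["warehouse"] * 60 + ["out_of_sim"] * 3 + ["loading_dock"] * 30
stats = time_per_area(labels)
```

Counting labels rather than tracking transitions keeps the statistic robust to brief misclassifications, at the cost of losing visit-order information.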
  3. In this paper, we examine private-sector collection and use of metadata and telemetry information and provide three main contributions. First, we lay out the extent to which "non-content"—the hidden parts of Internet communications (aspects the user does not explicitly enter) and telemetry—is highly revelatory of personal behavior. We show that, privacy policies notwithstanding, users rarely know that metadata and telemetry information is being collected and almost never know the uses to which it is being put. Second, we show that consumers, even if they knew the uses to which this type of personal information were being put, lack effective means to control the use of this type of data. The standard tool of notice-and-choice has well-known problems, including the user's lack of information with which to make a choice; and even if the user had sufficient information, exercising that choice is not practical. These problems are greatly exacerbated by the nature of the interchanges for communications metadata and telemetry information. Each new transmission—each click on an internal link on a webpage, for example—may carry different privacy implications for a user. The current regimen, notice-and-choice, presents a completely unworkable set of requests for a user, who could well be responding many times a minute regarding whether to allow the use of metadata beyond the purposes of content delivery and display. This is especially the case for telemetry, where understanding both present and future use of the data provided by the sensors requires a deeper understanding of what information these devices can provide than anyone but a trained engineer would have. Third, while there has been academic and industry research on telemetry's use, there has been little exploration of the policy and legal implications stemming from that use.
We provide this analysis, while at the same time addressing the closely related issues raised by industry's use of communications metadata to track user interests and behavior.
  5. Motivation: Tools for pairwise alignments between 3D structures of proteins are of fundamental importance for structural biology and bioinformatics, enabling visual exploration of evolutionary and functional relationships. However, the absence of a user-friendly, browser-based tool for creating alignments and visualizing them at both the 1D sequence and 3D structural levels makes this process unnecessarily cumbersome. Results: We introduce a novel pairwise structure alignment tool (rcsb.org/alignment) that seamlessly integrates into the RCSB Protein Data Bank (RCSB PDB) research-focused RCSB.org web portal. Our tool and its underlying application programming interface (alignment.rcsb.org) empower users to align several protein chains with a reference structure by providing access to established alignment algorithms (FATCAT, CE, TM-align, or Smith–Waterman 3D). The user-friendly interface simplifies parameter setup and input selection. Within seconds, our tool enables visualization of results in both sequence (1D) and structural (3D) perspectives through the RCSB PDB RCSB.org Sequence Annotations viewer and Mol* 3D viewer, respectively. Users can effortlessly compare structures deposited in the PDB archive alongside more than a million incorporated Computed Structure Models coming from ModelArchive and AlphaFold DB. Moreover, the tool can align custom structure data provided via a link/URL or by uploading atomic coordinate files directly. Importantly, alignment results can be bookmarked and shared with collaborators. By bridging the gap between the 1D sequences and 3D structures of proteins, our tool facilitates a deeper understanding of complex evolutionary relationships among proteins through comprehensive sequence and structural analyses. Availability and implementation: The alignment tool is part of the RCSB PDB research-focused RCSB.org web portal and is available at rcsb.org/alignment. Programmatic access is available via alignment.rcsb.org.
Frontend code has been published at github.com/rcsb/rcsb-pecos-app. Visualization is powered by the open-source Mol* viewer (github.com/molstar/molstar and github.com/molstar/rcsb-molstar) plus the Sequence Annotations in 3D Viewer (github.com/rcsb/rcsb-saguaro-3d).
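A client of such an alignment service would typically submit a JSON request naming a reference structure, one or more targets, and an algorithm. The sketch below only builds a plausible payload and sends nothing; the field names are assumptions for illustration, not the documented schema of alignment.rcsb.org, so consult that service's API documentation for the real request shape.

```python
def build_alignment_request(reference_id, target_ids, method="fatcat-rigid"):
    """Build an illustrative JSON payload for a pairwise alignment request.

    NOTE: hypothetical field names, not the documented alignment.rcsb.org
    schema. The method choices mirror the algorithms the tool exposes
    (FATCAT, CE, TM-align, Smith-Waterman 3D).
    """
    return {
        "mode": "pairwise",
        "method": {"name": method},
        # first structure is the reference; the rest are aligned against it
        "structures": [{"entry_id": reference_id}]
                      + [{"entry_id": t} for t in target_ids],
    }

# Align two PDB entries against a reference with rigid FATCAT.
payload = build_alignment_request("4HHB", ["1MBO", "1A6M"])
```

Separating the reference from the targets in the payload reflects the tool's model of aligning several chains with one reference structure.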