This content will become publicly available on July 16, 2025
- PAR ID: 10534998
- Publisher / Repository: American Geophysical Union
- Date Published:
- Journal Name: Geophysical Research Letters
- Volume: 51
- Issue: 13
- ISSN: 0094-8276
- Page Range / eLocation ID: e2024GL109353
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Visual odometry (VO) is a method for estimating the self-motion of a mobile robot from visual sensors. Unlike odometry based on integrating differential measurements that can accumulate errors, such as those from inertial sensors or wheel encoders, VO is not compromised by drift. However, image-based VO is computationally demanding, limiting its application in use cases with low-latency, low-memory, and low-energy requirements. Neuromorphic hardware offers low-power solutions to many vision and artificial-intelligence problems, but designing such solutions is complicated and they often have to be assembled from scratch. Here we propose the use of vector symbolic architecture (VSA) as an abstraction layer for designing algorithms compatible with neuromorphic hardware. Building on a VSA model for scene analysis, described in our companion paper, we present a modular neuromorphic algorithm that achieves state-of-the-art performance on two-dimensional VO tasks. Specifically, the proposed algorithm stores and updates a working memory of the presented visual environment. Based on this working memory, a resonator network estimates the changing location and orientation of the camera. We experimentally validate the neuromorphic VSA-based approach to VO with two benchmarks: one based on an event-camera dataset and the other in a dynamic scene with a robotic task.
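The core VSA idea sketched in the abstract can be illustrated with a toy example: a 2D displacement is encoded by raising random phasor base vectors to fractional powers, and the displacement of an observed scene is recovered by factorizing the observation. This is a minimal sketch, not the paper's algorithm; the dimensionality, the brute-force search (a stand-in for the resonator network), and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # hypervector dimensionality (illustrative choice)

# Random unit-phasor base vectors (FHRR-style), one per spatial axis.
base_x = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
base_y = np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def encode(x, y):
    # Fractional power encoding: a position is the elementwise
    # power of the base vectors, so binding composes additively in (x, y).
    return base_x**x * base_y**y

# A random "scene" vector, observed after a camera displacement of (3, -2):
scene = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
observed = scene * encode(3, -2)

# Brute-force factorization (stand-in for a resonator network): pick the
# displacement whose encoding best explains the observation.
best = max(
    ((x, y) for x in range(-5, 6) for y in range(-5, 6)),
    key=lambda p: np.real(np.vdot(scene * encode(*p), observed)) / D,
)
print(best)  # (3, -2)
```

The normalized inner product equals 1 exactly at the true displacement and is near zero elsewhere, which is what lets a resonator network converge to the correct factors far more cheaply than this exhaustive search.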
-
Abstract More than three dozen red sprites were captured above Hurricane Matthew on the nights of 1 and 2 October 2016 as it passed to the north of Venezuela after undergoing rapid intensification. Analyses using broadband magnetic fields indicate that all of the sprites were produced by positive cloud‐to‐ground (CG) strokes located within the outer rainbands as defined by relatively cold cloud top brightness temperatures (≤194 K). Negative CG strokes with impulse charge transfers exceeding the threshold of sprite production also existed, but the timescale of the charge transfer was not sufficiently long to develop streamers. The reported observations are contrary to the finding of the Imager of Sprites/Upper Atmospheric Lightning showing that sprites are preferentially produced by negative strokes in the same geographic region. Further ground‐based observations are desired to obtain additional insights into the convective regimes associated with the dominance of negative sprites in many oceanic and coastal thunderstorms.
-
Bitcoin, Ethereum and other blockchain-based cryptocurrencies, as deployed today, cannot support more than several transactions per second. Off-chain payment channels, a "layer 2" solution, are a leading approach for cryptocurrency scaling. They enable two mutually distrustful parties to rapidly send payments between each other and can be linked together to form a payment network, such that payments between any two parties can be routed through the network along a path that connects them. We propose a novel payment channel protocol, called Sprites. The main advantage of Sprites compared with earlier protocols is a reduced "collateral cost," meaning the amount of money × time that must be locked up before disputes are settled. In the Lightning Network and Raiden, a payment across a path of ℓ channels requires locking up collateral for Θ(ℓΔ) time, where Δ is the time to commit an on-chain transaction; every additional node on the path forces an increase in lock time. The Sprites construction provides a constant lock time, reducing the overall collateral cost to Θ(ℓ + Δ). Our presentation of the Sprites protocol is also modular, making use of a generic state channel abstraction. Finally, Sprites improves on prior payment channel constructions by supporting partial withdrawals and deposits without any on-chain transactions.
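The collateral-cost difference can be made concrete with back-of-the-envelope arithmetic. The numbers below are hypothetical, chosen only to show how the two lock-time expressions scale with path length; they are not figures from the paper.

```python
# Hypothetical parameters: a payment routed over l channels, where
# committing an on-chain transaction takes delta time units.
l, delta = 10, 6

# Lightning/Raiden-style HTLC routing: each hop's timelock must exceed the
# next hop's by about delta, so worst-case lock time grows as l * delta.
lightning_lock_time = l * delta

# Sprites: all hops can settle against a shared deadline, so the worst-case
# lock time grows as l + delta instead.
sprites_lock_time = l + delta

print(lightning_lock_time, sprites_lock_time)  # 60 16
```

Because collateral cost is money × time, the per-payment saving compounds along long routing paths, which is exactly the regime where payment networks are meant to operate.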
-
Neuromorphic vision sensors (NVS), also known as silicon retina, capture aspects of the biological functionality of the mammalian retina by transducing incident photocurrent into an asynchronous stream of spikes that denote positive and negative changes in intensity. Current state-of-the-art devices are effectively leveraged in a variety of settings, but still suffer from distinct disadvantages as they are transitioned into high-performance environments, such as space and autonomy. This paper provides an outline and demonstration of a data synthesis tool that gleans characteristics from the retina and allows the user not only to convert traditional video into neuromorphic data, but also to characterize design tradeoffs and inform future endeavors. Our retinomorphic model, RetinoSim, incorporates aspects of current NVS to allow for accurate data conversion while providing biologically inspired features to improve upon this baseline. RetinoSim was implemented in MATLAB with a graphical user interface frontend to allow for expeditious video conversion and architecture exploration. We demonstrate that the tool can be used for real-time conversion of sparse event streams, exploration of frontend configurations, and duplication of existing event datasets.
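The frame-to-event conversion at the heart of such tools follows a simple change-detection principle: emit a spike whenever a pixel's log intensity drifts beyond a threshold from its last-event level. The sketch below is a deliberately simplified illustration of that principle, not RetinoSim's implementation (which is in MATLAB and models additional retinal features such as noise and per-pixel dynamics); the function name and parameters are assumptions.

```python
import numpy as np

def frames_to_events(frames, times, threshold=0.2, eps=1e-6):
    """Convert a stack of intensity frames into DVS-style events.

    Emits (t, y, x, polarity) whenever the log intensity at a pixel moves
    more than `threshold` from its per-pixel reference level, which is then
    reset. Simplified: one event per crossing, no noise, no refractory period.
    """
    ref = np.log(frames[0] + eps)  # reference log intensity per pixel
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, y, x, pol))
            ref[y, x] = log_i[y, x]  # reset reference at this pixel
    return events

# A 2x2 toy clip in which one pixel brightens sharply between frames,
# producing a single positive-polarity event.
frames = np.array([[[0.1, 0.1], [0.1, 0.1]],
                   [[0.1, 0.5], [0.1, 0.1]]])
print(frames_to_events(frames, times=[0.0, 0.01]))
```

Working in log intensity rather than raw intensity is what gives event cameras (and their simulators) roughly contrast-invariant behavior across lighting conditions.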
-
Neuromorphic computing systems promise high energy efficiency and low latency. In particular, when integrated with neuromorphic sensors, they can be used to produce intelligent systems for a broad range of applications. An event‐based camera is such a neuromorphic sensor, inspired by the sparse and asynchronous spike representation of the biological visual system. However, processing the event data requires either using expensive feature descriptors to transform spikes into frames, or using spiking neural networks (SNNs) that are expensive to train. In this work, a neural network architecture is proposed, the reservoir nodes‐enabled neuromorphic vision sensing network (RN‐Net), based on dynamic temporal encoding by on‐sensor reservoirs and simple deep neural network (DNN) blocks. The reservoir nodes enable efficient temporal processing of asynchronous events by leveraging the native dynamics of the node devices, while the DNN blocks enable spatial feature processing. Combining these blocks in a hierarchical structure, the RN‐Net offers efficient processing of both local and global spatiotemporal features. RN‐Net executes dynamic vision tasks created by event‐based cameras at the highest accuracy reported to date, with a network one order of magnitude smaller. The use of simple DNNs and standard backpropagation‐based training rules further reduces implementation and training costs.
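The temporal encoding that reservoir nodes provide can be caricatured as leaky integration: each incoming spike adds to a state that decays exponentially, so the readout reflects both how many events arrived and how recently. This is a toy stand-in for the device dynamics described in the abstract, not the RN-Net model itself; the time constant and function names are assumptions.

```python
import numpy as np

def reservoir_encode(event_times, t_read, tau=0.05):
    """Leaky integration of one event train into a single readout value.

    Each spike adds a unit of state that decays exponentially with time
    constant tau; the value read at t_read therefore encodes the recent
    temporal history of the channel, which downstream DNN blocks can
    treat as an ordinary spatial feature.
    """
    state = 0.0
    last_t = 0.0
    for t in sorted(event_times):
        state *= np.exp(-(t - last_t) / tau)  # decay since previous spike
        state += 1.0                          # contribution of this spike
        last_t = t
    return state * np.exp(-(t_read - last_t) / tau)  # decay to readout time

# Two bursts of equal size: the recent one leaves a stronger trace.
recent = reservoir_encode([0.08, 0.09, 0.10], t_read=0.1)
old = reservoir_encode([0.00, 0.01, 0.02], t_read=0.1)
print(recent > old)  # True
```

This fading-memory property is what lets a reservoir turn an asynchronous spike stream into a dense snapshot at readout time without any trained recurrent weights.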