Recent advances in low-power, low-noise front-end amplifiers have made it possible to support high-speed data transmission within the deep roll-off regions of conventional wireline channels. Although these legacy channels are primarily limited by inter-symbol interference (ISI), they also require power-hungry front-end amplifiers because of their increased insertion loss at high frequencies. Wireline-like broadband channels, such as proximity communication and human-body communication (HBC), as well as multi-lane, densely packed channels, are further constrained by high loss and unique channel responses that leave the received signal noise-limited. To address these challenges, this paper proposes the discrete-time integrating amplifier as a low-power (<1 pJ/b in 65 nm CMOS at up to 5-6 Gb/s) alternative to traditional continuous-time front-end amplifiers. Integrating amplifiers also suppress noise through their inherent current-integration process. The paper provides a detailed mathematical analysis of the gain of two conventional and three novel, improved integrating amplifiers, accurate input-referred noise estimates, and the resulting signal-to-noise ratio, and compares the integrating amplifier's performance with that of a low-noise amplifier. The analysis identifies the optimal integrator architecture and is validated against simulated results. The paper also develops theoretical expressions for, and an in-depth understanding of, input-referred noise, supported by simulations in a 65 nm CMOS technology node. Finally, a comparative analysis between a low-noise amplifier and a discrete-time integrating amplifier demonstrates power and noise benefits for both legacy and wireline-like channels, while offering an easier design space, since the integrator provides two-dimensional control over its gain.
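To make the two-dimensional gain control and noise benefit concrete, the following is a minimal first-order sketch assuming an ideal Gm-C integrate-and-dump front end; the symbols ($g_m$, $C_{int}$, $T_{int}$) are illustrative and are not taken from the paper itself. Integrating the transconductor's output current onto a capacitor for one symbol window gives

$$ V_{out} = \frac{1}{C_{int}} \int_{0}^{T_{int}} g_m \, v_{in}(t)\, dt \;\approx\; \frac{g_m T_{int}}{C_{int}}\, V_{in} \quad\Longrightarrow\quad A_v \approx \frac{g_m T_{int}}{C_{int}}, $$

so the gain can be set through either the integration window $T_{int}$ or the integration capacitance $C_{int}$ (the two dimensions of control), rather than through $g_m$ alone. The integration also acts as a sinc-shaped filter whose equivalent noise bandwidth is roughly $1/(2T_{int})$, which is why the discrete-time integrating process inherently attenuates wideband input noise relative to a continuous-time amplifier of comparable signal bandwidth.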