Title: Hardware Trigger Processor for the MDT System
We are developing a low-latency hardware trigger processor for the Monitored Drift Tube (MDT) system in the muon spectrometer. The processor will fit candidate muon tracks in the drift tubes in real time, significantly improving the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an alternative associative-memory implementation, an ARM (Zynq) processor-based track fitter, and a compact ATCA carrier-board architecture. The ATCA architecture is designed to allow a modular, staged approach to deploying the system and exploring alternative technologies.
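As a rough illustration of the segment-finding idea named above, the sketch below implements a histogram-based Legendre transform in Python: each drift circle votes for the family of lines tangent to it, and the accumulator peak gives the segment parameters. This is a behavioral sketch only; the actual system realizes the transform in FPGA logic, and the hit format, coordinate convention, and binning parameters here are illustrative assumptions.

```python
import numpy as np

def legendre_segment_finder(hits, n_theta=256, n_d=256, d_max=500.0):
    """Sketch of a Legendre-transform segment finder for drift tubes.

    Each hit is (z, y, r): assumed tube position and drift radius.
    A line tangent to a drift circle satisfies
        d = z*cos(theta) + y*sin(theta) +/- r,
    so each hit votes for two curves in (theta, d) space; the
    accumulator maximum identifies the straight-line segment that is
    tangent to the most drift circles.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_d), dtype=np.int32)
    for z, y, r in hits:
        d_tangent = z * np.cos(thetas) + y * np.sin(thetas)
        for d in (d_tangent + r, d_tangent - r):
            bins = np.floor((d + d_max) / (2 * d_max) * n_d).astype(int)
            ok = (bins >= 0) & (bins < n_d)
            acc[np.arange(n_theta)[ok], bins[ok]] += 1  # one vote per curve point
    i_theta, i_d = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i_theta], -d_max + (i_d + 0.5) * (2 * d_max / n_d)
```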
Award ID(s):
1830832
NSF-PAR ID:
10232684
Author(s) / Creator(s):
Date Published:
Journal Name:
PoS - Proceedings of Science
Volume:
313
ISSN:
1824-8039
Page Range / eLocation ID:
148
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent advancements in energy-harvesting techniques provide an alternative to batteries for resource-constrained IoT devices and lead to a new computing paradigm, the intermittent computing model. In this model, a software module continues its execution from where it left off when an energy shortage occurred. Enforcing the security of an intermittent software module is challenging because its power-off state has to be protected from a malicious adversary in addition to its power-on state, while the security mechanisms put in place must impose a low overhead on the performance, resource consumption, and cost of a device. In this paper, we propose SIA (Secure Intermittent Architecture), a security architecture for resource-constrained IoT devices. SIA leverages low-cost security features available in commercial off-the-shelf microcontrollers to protect both the power-on and power-off state of an intermittent software module. SIA therefore enables a host of secure intermittent computing applications, such as self-attestation, remote attestation, and secure communication. Moreover, our architecture provides confidentiality and integrity guarantees to an intermittent computing module at no cost compared to previous approaches in the literature that impose significant overheads. The salient characteristic of SIA is that it does not require any hardware modifications and hence can be directly applied to existing IoT devices. We implemented and evaluated SIA on a resource-constrained IoT device based on an MSP430 processor. Besides being secure, SIA is simple and efficient. We confirm the feasibility of SIA for resource-constrained IoT devices with experimental results from several intermittent computing applications. Our prototype implementation outperforms the secure intermittent computing solution of Suslowicz et al., presented at IGSC 2018, by two to three orders of magnitude.
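To illustrate the core idea of protecting a module's power-off state, here is a minimal sketch in Python (for readability; SIA itself targets MSP430-class hardware). It models only the integrity side of a secure checkpoint, omitting the confidentiality guarantee the paper also provides, and the key handling and serialization are illustrative assumptions, not SIA's actual mechanism.

```python
import hmac, hashlib, json, os

DEVICE_KEY = os.urandom(16)  # stand-in for a key sealed in MCU hardware

def checkpoint(state: dict) -> bytes:
    """Serialize module state and bind an integrity tag before power loss."""
    blob = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, blob, hashlib.sha256).digest()
    return tag + blob  # stored in non-volatile memory across the outage

def restore(record: bytes) -> dict:
    """Reject any tampering with the power-off state before resuming."""
    tag, blob = record[:32], record[32:]
    expected = hmac.new(DEVICE_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("checkpoint failed integrity check")
    return json.loads(blob)

# Example: survive a simulated power cycle, then resume from the state.
saved = checkpoint({"pc": 120, "counter": 7})
assert restore(saved) == {"pc": 120, "counter": 7}
```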
  2. Obeid, I.; Selesnick, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors, ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems and routinely employ multiple GPUs to accelerate the training process.

Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; and (3) a job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is difficult to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we explore in this poster.

Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation, which they can then use as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, comparing two algorithms becomes challenging: each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but they are also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow constantly updates functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge, which makes it very difficult to optimize the system or select the best configurations.

The overall impact of the issues described above is significant: error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since they train at least two orders of magnitude faster than CPUs [7]; large-scale experiments are simply not feasible without them. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, an element of randomness is added to the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment and whether the changes we make are effective.

In this poster, we discuss a variety of issues related to reproducibility and introduce ways we mitigate their effects. For example, TensorFlow uses a random number generator (RNG) that is not seeded by default; the RNG determines the initialization point and how certain functions execute. The solution is to seed all the necessary components before training the model, which forces TensorFlow to use the same initialization point and fixes how certain layers (e.g., dropout layers) behave. However, seeding all the RNGs does not guarantee a controlled experiment: other variables can affect the outcome, such as training on GPUs, allowing multi-threading on CPUs, and using certain layers. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training; we save the data order from the last experiment so that the newer experiment follows the same order, since shuffling the data differently can affect performance through how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We try to avoid 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though it technically increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while maintaining the ability to reproduce models. In our poster presentation we demonstrate these issues using some novel visualization tools, present several examples of the extent to which they influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
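A minimal sketch of the kind of RNG seeding, precision pinning, and data-order fixing described above, assuming a TensorFlow 2.x environment (the poster's TensorFlow 1.x API differs, e.g. tf.set_random_seed); the seed value and dataset are illustrative.

```python
import os
import random

import numpy as np
import tensorflow as tf  # assumes TF 2.x

SEED = 1234  # illustrative value; any fixed seed works

os.environ["PYTHONHASHSEED"] = str(SEED)  # fix Python hash randomization
random.seed(SEED)                         # Python-level RNG
np.random.seed(SEED)                      # NumPy RNG (weight init, shuffling)
tf.random.set_seed(SEED)                  # TensorFlow graph-level RNG

tf.keras.backend.set_floatx("float32")    # pin precision to 32-bit floats

# Request deterministic op implementations on GPU (TF >= 2.9); older
# releases use the TF_DETERMINISTIC_OPS=1 environment variable instead.
tf.config.experimental.enable_op_determinism()

# Fix the data order across runs: a seeded shuffle that is not
# reshuffled each epoch exposes the model to the data identically.
dataset = tf.data.Dataset.range(1000).shuffle(
    buffer_size=1000, seed=SEED, reshuffle_each_iteration=False)
```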
  3. Over the next decade, processor design will encounter a number of challenges. The ongoing miniaturization of semiconductor manufacturing technologies, which has enabled the integration of hundreds to thousands of processing cores on a single chip, is pushing the limits of physical laws. The fabrication process has also grown more complex and globalized, with widespread use of third-party IPs (intellectual properties). This development ecosystem has complicated the security and trust view of processors. Some of the pressing processor architecture design questions are: (1) how to use reconfiguration and redundancy to improve reliability without introducing additional and potentially insecure system states, (2) what analytical models lend themselves best to the joint implementation of reliability and security in these systems, and (3) how to optimally and securely share resources and data among processing elements with a high degree of reliability. In this work, we present and discuss (1) principal reliability approaches (error correction codes and modular redundancy), (2) processor-architecture-specific reliability, and (3) major secure processor architectures. We also highlight key features of a small representative class of secure and reliable architectures.
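As a minimal illustration of one of the reliability approaches named above, the sketch below models triple modular redundancy with majority voting in Python. In a real processor this vote is taken in hardware across replicated functional units; the function name and example values are illustrative assumptions.

```python
from collections import Counter

def tmr_vote(replica_outputs):
    """Majority-vote the outputs of three redundant units so that a
    single faulty replica is masked without interrupting execution."""
    (value, votes), = Counter(replica_outputs).most_common(1)
    if votes < 2:
        raise RuntimeError("no majority: replicas disagree pairwise")
    return value

# Example: three redundant ALU results, one corrupted by a transient fault.
assert tmr_vote([42, 42, 17]) == 42
```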
  4. As a model of recurrent spiking neural networks, the Liquid State Machine (LSM) offers a powerful brain-inspired computing platform for pattern recognition and machine learning applications. Because it operates by processing neural spiking activities, the LSM naturally lends itself to an efficient hardware implementation via exploitation of the typically sparse firing patterns that emerge from the recurrent neural network and smart processing of the computational tasks orchestrated by different firing events at runtime. We explore these opportunities by presenting an LSM processor architecture with integrated on-chip learning and its FPGA implementation. Our LSM processor leverages the sparsity of firing activities to allow for efficient event-driven processing and activity-dependent clock gating. Using the spoken English letters from the TI46 speech recognition corpus [1] as a benchmark, we show that the proposed FPGA-based neural processor system is up to 29% more energy efficient than a baseline LSM processor with little extra hardware overhead.
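The sketch below is a software-level illustration of the event-driven principle such a processor exploits: a leaky integrate-and-fire update that only does work when an input spike arrives, so sparse firing activity translates directly into skipped computation (the hardware analogue is activity-dependent clock gating). The parameter values and event format are illustrative assumptions, not the paper's design.

```python
import numpy as np

def lif_event_driven(weights, spike_events, tau=20.0, v_th=1.0):
    """Event-driven leaky integrate-and-fire layer.

    weights: (n_post, n_pre) synaptic matrix.
    spike_events: list of (time_ms, presynaptic_index), sorted by time.
    Membrane potentials decay analytically across the gaps between
    events, so no per-timestep work is done while the input is silent.
    """
    v = np.zeros(weights.shape[0])  # membrane potentials
    last_t = 0.0
    out_spikes = []
    for t, pre in spike_events:
        v *= np.exp(-(t - last_t) / tau)   # decay only across the gap
        last_t = t
        v += weights[:, pre]               # apply the single input event
        fired = np.where(v >= v_th)[0]
        out_spikes.extend((t, int(j)) for j in fired)
        v[fired] = 0.0                     # reset neurons that fired
    return out_spikes
```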