Title: Estimating outdoor temperature from CPU temperature for IoT applications in agriculture
In this paper, we investigate using CPU temperature from small, low-cost, single-board computers to predict outdoor temperature in IoT-based precision agriculture settings. Temperature is a key metric in these settings, used to inform and actuate farm operations such as irrigation scheduling, frost damage mitigation, and greenhouse management. Using cheap single-board computers as temperature sensors can drive down the cost of sensing in these applications and make it possible to monitor a large number of micro-climates concurrently. We have developed a system in which devices communicate their CPU measurements to an on-farm edge cloud. The edge cloud uses a combination of calibration, smoothing (noise removal), and linear regression to predict the outdoor temperature at each device. We evaluate the accuracy of this approach for different temperature sensors, devices, and locations, as well as different training and calibration durations.
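To make the pipeline concrete, here is a minimal sketch of the calibration-and-prediction approach the abstract describes, assuming a moving-average smoother and an ordinary least-squares fit; the window size, sample values, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def smooth(series, window=5):
    """Moving-average noise removal over a 1-D CPU-temperature series."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

def calibrate(cpu_temps, outdoor_temps):
    """Least-squares fit outdoor ~ a * cpu + b over a calibration period."""
    a, b = np.polyfit(cpu_temps, outdoor_temps, deg=1)
    return a, b

def predict(cpu_temps, a, b):
    """Predict outdoor temperature from smoothed CPU readings."""
    return a * smooth(cpu_temps) + b

# Toy calibration: smoothed CPU readings paired with ground-truth readings
# from a reference sensor (values invented for the example).
cpu = np.array([48.2, 48.9, 49.1, 50.3, 51.0, 51.8, 52.4, 53.0])
truth = np.array([21.1, 21.5, 21.9, 22.4])  # aligned with the smoothed output
a, b = calibrate(smooth(cpu), truth)
print(predict(cpu, a, b))
```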
Award ID(s):
1703560
NSF-PAR ID:
10091226
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
International Conference on the Internet of Things
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. With the proliferation of low-cost sensors and the Internet of Things, the rate of producing data far exceeds the compute and storage capabilities of today's infrastructure. Much of this data takes the form of time series, and in response, there has been increasing interest in the creation of time series archives in the last decade, along with the development and deployment of novel analysis methods to process the data. The general strategy has been to apply a plurality of similarity search mechanisms to various subsets and subsequences of time series data in order to identify repeated patterns and anomalies; however, the computational demands of these approaches render them incompatible with today's power-constrained embedded CPUs. To address this challenge, we present FA-LAMP, an FPGA-accelerated implementation of the Learned Approximate Matrix Profile (LAMP) algorithm, which predicts the correlation between streaming data sampled in real time and a representative time series dataset used for training. FA-LAMP lends itself as a real-time solution for time series analysis problems such as classification. We present the implementation of FA-LAMP on both edge- and cloud-based prototypes. On the edge devices, FA-LAMP integrates accelerated computation as close as possible to IoT sensors, thereby eliminating the need to transmit and store data in the cloud for subsequent analysis. On the cloud-based accelerators, FA-LAMP can execute multiple LAMP models on the same board, allowing simultaneous processing of incoming data from multiple data sources across a network. LAMP employs a Convolutional Neural Network (CNN) for prediction. This work investigates the challenges and limitations of deploying CNNs on FPGAs using the Xilinx Deep Learning Processor Unit (DPU) and the Vitis AI development environment. We expose several technical limitations of the DPU, while providing a mechanism to overcome them by attaching custom IP block accelerators to the architecture. We evaluate FA-LAMP using a low-cost Xilinx Ultra96-V2 FPGA as well as a cloud-based Xilinx Alveo U280 accelerator card and measure their performance against a prototypical LAMP deployment running on a Raspberry Pi 3, an Edge TPU, a GPU, a desktop CPU, and a server-class CPU. In the edge scenario, the Ultra96-V2 FPGA delivered better performance and lower energy consumption than the Raspberry Pi; in the cloud scenario, the server CPU and GPU outperformed the Alveo U280 accelerator card, while the desktop CPU achieved comparable performance; however, the Alveo card offered an order of magnitude lower energy consumption than the other four platforms. Our implementation is publicly available at https://github.com/aminiok1/lamp-alveo.
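As a rough illustration of the streaming side of LAMP, the sketch below slides a fixed-length window over incoming samples and asks a model for the predicted nearest-neighbor correlation of each subsequence. The `lamp_predict` stand-in is a placeholder, not the trained CNN from the paper, and the window length and anomaly threshold are invented for the example.

```python
import numpy as np

WINDOW = 256  # subsequence length the model is assumed to be trained on

def znorm(w):
    """Z-normalize a subsequence, as matrix-profile methods typically do."""
    return (w - w.mean()) / (w.std() + 1e-8)

def lamp_predict(w):
    """Placeholder for the trained LAMP CNN: maps one z-normalized
    subsequence to a predicted nearest-neighbor correlation in [0, 1).
    A real deployment would run the CNN (e.g., on the FPGA) here."""
    return float(np.tanh(np.abs(w).mean()))

def stream_correlations(stream):
    """One prediction per sliding window, so repeated patterns and anomalies
    can be flagged without shipping raw samples to the cloud."""
    for start in range(len(stream) - WINDOW + 1):
        yield lamp_predict(znorm(stream[start:start + WINDOW]))

samples = np.sin(np.linspace(0, 50, 1000)) + 0.1 * np.random.randn(1000)
flagged = [i for i, c in enumerate(stream_correlations(samples)) if c < 0.2]
print(f"{len(flagged)} low-correlation (candidate anomaly) windows")
```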
2. Mobile sequencing technologies, including Oxford Nanopore's MinION, Mk1C, and SmidgION, are putting genomics in the palm of a hand, opening unprecedented new opportunities in clinical and ecological research and translational applications. While sequencers now need only a USB outlet and provide on-board preprocessing (e.g., base calling), the main data analysis phases are tied to an available broadband Internet connection and cloud computing. Yet the ubiquity of tablets and smartphones, along with their increase in computational power, makes them a perfect candidate for enabling mobile/edge bioinformatics analytics. Also, in on-site experimental settings, tablets and smartphones are preferable to standard computers due to their resilience to humidity or spills and their ease of sterilization. We here present an experimental study on power dissipation, aiming at reducing the battery consumption that currently impedes the execution of intensive bioinformatics analytics pipelines. In particular, we investigated the effects of assorted data structures (including hash tables, vectors, balanced trees, and tries) employed in some of the most common tasks of a bioinformatics pipeline: k-mer representation and counting. By employing a thermal camera, we show how different k-mer-handling data structures impact the power dissipation on a smartphone, finding that a cache-oblivious data structure reduces power dissipation (up to 26% better than the others). In conclusion, the choice of data structures in mobile bioinformatics must consider not only computing efficiency (e.g., succinct data structures to reduce RAM usage), but also the power consumption of mobile devices, which heavily rely on batteries in order to function.
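To illustrate why the choice of data structure matters here, the following sketch contrasts the two simplest k-mer counting layouts mentioned in the abstract: a hash table, whose scattered lookups are cache-unfriendly, and a flat vector indexed by a 2-bit-per-base encoding, which trades memory (4^k cells) for sequential access. It is a toy comparison, not the paper's instrumented benchmark.

```python
from collections import defaultdict

def count_kmers_hash(seq, k):
    """Hash-table k-mer counting: simple, but pointer-chasing lookups can be
    cache-unfriendly on a power-constrained mobile CPU."""
    counts = defaultdict(int)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return counts

ENCODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def count_kmers_vector(seq, k):
    """Flat-vector counting: each k-mer is packed into a 2-bit-per-base
    integer index, giving sequential memory access at the cost of 4**k cells."""
    counts = [0] * (4 ** k)
    for i in range(len(seq) - k + 1):
        idx = 0
        for base in seq[i:i + k]:
            idx = (idx << 2) | ENCODE[base]
        counts[idx] += 1
    return counts

print(dict(count_kmers_hash("ACGTACGT", 3)))
```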
3. Sensitive dispersive readouts of single-electron devices ("gate reflectometry") rely on one-port radio-frequency (RF) reflectometry to read out the state of the sensor. A standard practice in reflectometry measurements is to design an impedance transformer to match the impedance of the load to the characteristic impedance of the transmission line and thus obtain the best sensitivity and signal-to-noise ratio. This is particularly important for measuring large impedances, typical for dispersive readouts of single-electron devices, because even a small mismatch will cause a strong signal degradation. When performing RF measurements, a calibration and error correction of the measurement apparatus must be performed in order to remove errors caused by unavoidable non-idealities of the measurement system. Lack of calibration makes optimizing a matching network difficult and ambiguous, and it also prevents a direct quantitative comparison between measurements taken of different devices or on different systems. We propose and demonstrate a simple, straightforward method to design and optimize a pi matching network for readouts of devices with large impedance, $Z \ge 1\,\mathrm{M}\Omega$. It is based on a single calibrated low-temperature measurement of an unadjusted network composed of a single L-section, followed by a simple calculation to determine the value of the "balancing" capacitor needed to achieve matching conditions for a pi network. We demonstrate that the proposed calibration/error-correction technique can be directly applied at low temperature using inexpensive calibration standards. Using proper modeling of the matching networks adjusted for low-temperature operation, the measurement system can be easily optimized to achieve the best conditions for energy transfer and targeted bandwidth, and can be used for quantitative measurements of the device impedance. In this work we use gate reflectometry to read out the signal generated by arrays of parallel-connected Al-AlOx single-electron boxes. Such arrays can be used as a fast nanoscale voltage sensor for scanning probe applications. We perform measurements of sensitivity and bandwidth for various settings of the matching network connected to the arrays and obtain strong agreement with the simulations.
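A hedged numerical sketch of the matching problem: given assumed values for the L-section inductance, the device-side capacitance, and a 1 MΩ load, it sweeps the "balancing" capacitor of a pi network (shunt C1, series L, shunt C2) and reports the value that minimizes the input reflection coefficient. The paper determines this capacitor analytically from one calibrated measurement; the brute-force sweep here only illustrates the matching condition, and every component value is illustrative.

```python
import numpy as np

Z0 = 50.0          # characteristic impedance of the line (ohms)
F = 500e6          # readout frequency (illustrative)
W = 2 * np.pi * F
ZL = 1e6           # large device impedance, in the Z >= 1 MOhm regime
L = 510e-9         # series inductance of the measured L-section (assumed)
C2 = 0.2e-12       # device-side shunt capacitance (assumed)

def pi_input_impedance(c1, l, c2, zl, w):
    """Input impedance of a pi network: shunt c1, series l, shunt c2, load zl."""
    par = lambda a, b: a * b / (a + b)       # parallel combination
    z_c1 = 1 / (1j * w * c1)
    z_c2 = 1 / (1j * w * c2)
    return par(z_c1, 1j * w * l + par(z_c2, zl))

def reflection(c1):
    """Magnitude of the input reflection coefficient seen by the line."""
    zin = pi_input_impedance(c1, L, C2, ZL, W)
    return abs((zin - Z0) / (zin + Z0))

# Sweep the "balancing" capacitor and keep the best-matched value.
c1_grid = np.linspace(1e-12, 100e-12, 5000)
gammas = [reflection(c) for c in c1_grid]
best = int(np.argmin(gammas))
print(f"best C1 ~ {c1_grid[best] * 1e12:.1f} pF, |Gamma| = {gammas[best]:.3f}")
```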
4. Rehabilitation is a crucial process for patients suffering from motor disorders. The current practice is to perform rehabilitation exercises under clinical expert supervision. New approaches are needed to allow patients to perform prescribed exercises at home, alleviating commuting requirements, expert shortages, and healthcare costs. Human joint estimation is a substantial component of these programs, since it offers valuable visualization and feedback based on body movements. Camera-based systems have been popular for capturing joint motion. However, they have a high cost, raise serious privacy concerns, and require strict lighting and placement settings. We propose a millimeter-wave (mmWave)-based assistive rehabilitation system (MARS) for motor disorders to address these challenges. MARS provides a low-cost solution with competitive object localization and detection accuracy. It first maps the 5D time-series point cloud from mmWave to a lower dimension. Then, it uses a convolutional neural network (CNN) to estimate the accurate locations of human joints. MARS can reconstruct 19 human joints and their skeleton from the point cloud generated by mmWave radar. We evaluate MARS using ten specific rehabilitation movements performed by four human subjects involving all body parts and obtain an average mean absolute error of 5.87 cm across all joint positions. To the best of our knowledge, this is the first rehabilitation-movement dataset using mmWave point clouds. MARS is evaluated on the Nvidia Jetson Xavier-NX board. Model inference takes only 64 ms and consumes 442 mJ of energy. These results demonstrate the practicality of MARS on low-power edge devices.
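A minimal PyTorch sketch of the regression step: assuming the 5D point cloud has already been reduced to a fixed-size grid (here 8x8 cells with five feature channels), a small CNN maps each frame to 3D coordinates for the 19 joints. The layer sizes and grid resolution are invented for the example and do not reproduce the MARS architecture.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 19  # joints reconstructed by MARS

class JointRegressor(nn.Module):
    """Toy stand-in for the MARS CNN: consumes a fixed-size projection of
    the 5D mmWave point cloud (here an 8x8 grid with five feature channels:
    x, y, z, Doppler, intensity) and regresses 3D coordinates for 19 joints."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, NUM_JOINTS * 3),
        )

    def forward(self, grid):
        return self.net(grid).view(-1, NUM_JOINTS, 3)

# One radar frame after the dimensionality-reduction step: sparse points
# binned into the grid, per-cell features averaged (random data here).
frame = torch.randn(1, 5, 8, 8)
joints = JointRegressor()(frame)
print(joints.shape)  # torch.Size([1, 19, 3])
```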
5. Edge cloud solutions that bring the cloud closer to the sensors can be very useful for meeting the low-latency requirements of many Internet-of-Things (IoT) applications. However, IoT traffic can also be intermittent, so running applications constantly can be wasteful. Therefore, a serverless edge cloud that is responsive and provides low-latency features is a very attractive option for a resource- and cost-efficient IoT application environment. In this paper, we discuss the key components needed to support IoT traffic in the serverless edge cloud and identify the critical challenges that make it difficult to directly use existing serverless solutions, such as Knative, for IoT applications. These include the overhead of the heavyweight components that off-the-shelf serverless platforms, designed for large-scale centralized clouds, use to manage the overall system, and of the software adaptors they use for communication protocol translation. The latency imposed by 'cold start' is a further deterrent. To address these challenges, we redesign several components of the Knative serverless framework. We use a streamlined protocol adaptor to leverage the MQTT IoT protocol in our serverless framework for IoT event processing. We also create a novel, event-driven proxy based on the extended Berkeley Packet Filter (eBPF) to replace the regular heavyweight Knative queue proxy. Our preliminary experimental results show that the event-driven proxy is a suitable replacement for the queue proxy in an IoT serverless environment, resulting in lower CPU usage and higher request throughput.
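As a sketch of the protocol-adaptor idea (not the paper's implementation), the snippet below subscribes to an MQTT topic and forwards each message to a serverless function's HTTP endpoint; the broker, topic, and URL are hypothetical, and the paper's eBPF-based proxy replaces exactly this kind of user-space hop.

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x callback API
import requests

# Hypothetical endpoint of the serverless function that handles IoT events.
FUNCTION_URL = "http://iot-handler.default.svc.cluster.local"
BROKER, TOPIC = "localhost", "sensors/#"

def on_message(client, userdata, msg):
    """Each MQTT publish becomes one HTTP invocation of the function."""
    requests.post(FUNCTION_URL, data=msg.payload,
                  headers={"X-IoT-Topic": msg.topic})

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```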