

Search for: All records where Creators/Authors contains "Nakandala, Supun"


  1. Background: Hip-worn accelerometer cut-points have poor validity for assessing children's sedentary time, which may partly explain the equivocal health associations reported in prior research. Improved processing/classification methods for these monitors would enrich the evidence base and inform the development of more effective public health guidelines. The present study aimed to develop and evaluate a novel computational method (CHAP-child) for classifying sedentary time from hip-worn accelerometer data. Methods: Participants were 278 children aged 8–11 years, recruited from nine primary schools of differing socioeconomic status in Melbourne, Australia. Participants concurrently wore a thigh-worn activPAL (ground truth) and a hip-worn ActiGraph (test measure) during up to four seasonal assessment periods, each lasting up to 8 days. activPAL data were used to train and evaluate the CHAP-child deep learning model, which classifies each 10-s epoch of raw ActiGraph acceleration data as sitting or non-sitting, creating comparable information from the two monitors. CHAP-child was evaluated alongside the current-practice 100 counts per minute (cpm) method for hip-worn ActiGraph monitors. Performance was tested for each 10-s epoch and for participant-season level sedentary time and bout variables (e.g., mean bout duration). Results: Across participant-seasons, CHAP-child correctly classified each epoch as sitting or non-sitting relative to activPAL, with a mean balanced accuracy of 87.6% (SD = 5.3%). Sit-to-stand transitions were correctly classified with a mean sensitivity of 76.3% (SD = 8.3%). For most participant-season level variables, CHAP-child estimates were within ±11% (mean absolute percent error [MAPE]) of activPAL, and correlations between CHAP-child and activPAL were generally very large (> 0.80). For the current-practice 100 cpm method, most MAPEs exceeded ±30% and most correlations were small or moderate (≤ 0.60) relative to activPAL. Conclusions: There was strong support for the concurrent validity of the CHAP-child classification method, which allows researchers to derive activPAL-equivalent measures of sedentary time, sit-to-stand transitions, and sedentary bout patterns from hip-worn triaxial ActiGraph data. Applying CHAP-child to existing datasets may provide greater insights into the potential impacts and influences of sedentary time in children.
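As an illustration of the kind of epoch-level classifier described above, here is a minimal sketch. It is not the actual CHAP-child architecture (which the abstract does not specify); it assumes 30-Hz triaxial input, so each 10-s epoch is a (3, 300) window, and uses a simple 1-D CNN that emits one sitting/non-sitting label per epoch.

```python
# Minimal sketch of a CHAP-style epoch posture classifier.
# Assumptions (not from the abstract): 30 Hz sampling, a plain 1-D CNN
# backbone, and the label convention 0 = sitting, 1 = non-sitting.
import torch
import torch.nn as nn

class EpochPostureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=9, padding=4),   # raw x/y/z axes in
            nn.ReLU(),
            nn.MaxPool1d(3),                              # 300 -> 100 samples
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # global pooling
        )
        self.classifier = nn.Linear(64, 2)                # sitting vs. non-sitting

    def forward(self, x):            # x: (batch, 3, 300) = 10 s at 30 Hz
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # logits per 10-s epoch

model = EpochPostureCNN()
epochs = torch.randn(8, 3, 300)      # 8 synthetic 10-s epochs
print(model(epochs).argmax(dim=1))   # per-epoch posture predictions
```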
  2. Background: Hip-worn accelerometers are commonly used, but data processed using the 100 counts per minute cut point do not accurately measure sitting patterns. We developed and validated a model to accurately classify sitting and sitting patterns using hip-worn accelerometer data from a wide age range of older adults. Methods: Deep learning models were trained with 30-Hz triaxial hip-worn accelerometer data as inputs and activPAL sitting/nonsitting events as ground truth. Data from 981 adults aged 35–99 years from cohorts on two continents were used to train the model, which we call CHAP-Adult (Convolutional Neural Network Hip Accelerometer Posture-Adult). Validation was conducted among 419 randomly selected adults not included in model training. Results: Mean errors (activPAL − CHAP-Adult) and 95% limits of agreement were: sedentary time −10.5 (−63.0, 42.0) min/day, breaks in sedentary time 1.9 (−9.2, 12.9) breaks/day, mean bout duration −0.6 (−4.0, 2.7) min, usual bout duration −1.4 (−8.3, 5.4) min, alpha .00 (−.04, .04), and time in ≥30-min bouts −15.1 (−84.3, 54.1) min/day. Respective mean (and absolute) percent errors were: −2.0% (4.0%), −4.7% (12.2%), 4.1% (11.6%), −4.4% (9.6%), 0.0% (1.4%), and 5.4% (9.6%). Pearson's correlations were: .96, .92, .86, .92, .78, and .96. Error was generally consistent across age, gender, and body mass index groups, with the largest deviations observed for those with body mass index ≥30 kg/m². Conclusions: Overall, these strong validation results indicate that CHAP-Adult represents a significant advancement in the ambulatory measurement of sitting and sitting patterns using hip-worn accelerometers. Pending external validation, it could be widely applied to data from around the world to extend understanding of the epidemiology and health consequences of sitting.
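The agreement statistics reported above (mean error, 95% limits of agreement, mean absolute percent error, and Pearson's r) can each be computed per summary variable as in this sketch. The data and function name here are illustrative, not from the study's code; the synthetic offset loosely mimics the reported sedentary-time error.

```python
# Sketch of validation-style agreement statistics for one daily summary
# variable (e.g., activPAL vs. CHAP-Adult sedentary min/day).
import numpy as np

def agreement_stats(ground_truth, estimate):
    """Per-participant daily values; returns mean error, 95% limits of
    agreement (Bland-Altman), mean absolute percent error, and Pearson r."""
    diff = ground_truth - estimate                  # activPAL - CHAP-Adult
    mean_err = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)            # 95% limits of agreement
    mape = np.mean(np.abs(diff / ground_truth)) * 100
    r = np.corrcoef(ground_truth, estimate)[0, 1]
    return mean_err, (mean_err - half_width, mean_err + half_width), mape, r

rng = np.random.default_rng(0)
truth = rng.normal(480, 60, size=419)               # synthetic min/day values
est = truth + rng.normal(10.5, 26.8, size=419)      # synthetic estimate error
print(agreement_stats(truth, est))
```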
  3. Deep learning (DL) is revolutionizing many fields. However, there is a major bottleneck to the wide adoption of DL: the pain of model selection, which requires exploring a large config space of model architectures and training hyper-parameters before picking the best model. The two existing popular paradigms for exploring this config space pose a false dichotomy. AutoML-based model selection explores configs with high throughput but uses human intuition minimally. Alternatively, interactive human-in-the-loop model selection relies completely on human intuition to explore the config space but often has very low throughput. To mitigate the above drawbacks, we propose a new paradigm for model selection that we call intermittent human-in-the-loop model selection. In this demonstration, we will showcase our approach using five real-world DL model selection workloads. A short video of our demonstration can be found here: https://youtu.be/K3THQy5McXc.
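One way to picture the intermittent paradigm is a loop that alternates automated high-throughput exploration with brief human checkpoints where the practitioner prunes or redirects the config space. The abstract does not prescribe an API, so everything below (names, batch logic, prompt) is a hypothetical sketch of the workflow.

```python
# Hypothetical sketch of intermittent human-in-the-loop model selection:
# batches of automated search, with a human reshaping the space between them.
import itertools
import random

def run_batch(configs):
    # Stand-in for launching real training jobs; returns (config, score).
    return [(c, random.random()) for c in configs[:16]]

space = {"lr": [1e-2, 1e-3, 1e-4], "depth": [18, 34, 50]}
configs = [dict(zip(space, v)) for v in itertools.product(*space.values())]

for round_ in range(3):
    results = run_batch(configs)                    # high-throughput phase
    results.sort(key=lambda cs: cs[1], reverse=True)
    print(f"round {round_}: best so far {results[0]}")
    # Intermittent human phase: prune the space based on what was learned,
    # e.g. dropping learning rates whose runs plateaued.
    keep = input("comma-separated lrs to keep (blank = all): ").strip()
    if keep:
        allowed = {float(x) for x in keep.split(",")}
        configs = [c for c in configs if c["lr"] in allowed]
```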
  4. Deep learning now offers state-of-the-art accuracy for many prediction tasks. Deep convolutional neural networks (CNNs), a form of deep learning, are especially popular on image, video, and time series data. Because of its high computational cost, CNN inference is often a bottleneck in analytics tasks on such data. Thus, much work in the computer architecture, systems, and compilers communities studies how to make CNN inference faster. In this work, we show that by elevating the abstraction level and re-imagining CNN inference as queries, we can bring to bear database-style query optimization techniques to improve CNN inference efficiency. We focus on tasks that perform CNN inference repeatedly on inputs that are only slightly different. We identify two popular CNN tasks with this behavior: occlusion-based explanations (OBE) and object recognition in videos (ORV). OBE is a popular method for "explaining" CNN predictions. It outputs a heatmap over the input to show which regions (e.g., image pixels) mattered most for a given prediction. It leads to many re-inference requests on locally modified inputs. ORV uses CNNs to identify and track objects across video frames. It also leads to many re-inference requests. We cast such tasks in a unified manner as a novel instance of the incremental view maintenance problem and create a comprehensive algebraic framework for incremental CNN inference that reduces computational costs. We produce materialized views of features produced inside a CNN and connect them with a novel multi-query optimization scheme for CNN re-inference. Finally, we also devise novel OBE-specific and ORV-specific approximate inference optimizations exploiting their semantics. We prototype our ideas in Python to create a tool called Krypton that supports both CPUs and GPUs. Experiments with real data and CNNs show that Krypton reduces runtimes by up to 5× (respectively, 35×) to produce exact (respectively, high-quality approximate) results without raising resource requirements.
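For concreteness, this is the naive OBE workload the abstract describes: slide an occluding patch over the image and re-run full inference at every position to build the heatmap. The model choice, patch size, and stride below are illustrative; this is the unoptimized baseline, not Krypton itself.

```python
# Naive occlusion-based explanation: one full re-inference per patch position.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
img = torch.randn(1, 3, 224, 224)                  # synthetic input image
with torch.no_grad():
    probs = model(img).softmax(dim=1)
target = probs.argmax(dim=1).item()
base_p = probs[0, target].item()

patch, stride = 32, 16
positions = range(0, 224 - patch + 1, stride)
heat = torch.zeros(len(positions), len(positions))
with torch.no_grad():
    for a, i in enumerate(positions):              # every (i, j) position is
        for b, j in enumerate(positions):          # a fresh re-inference request
            occluded = img.clone()
            occluded[:, :, i:i+patch, j:j+patch] = 0
            p = model(occluded).softmax(dim=1)[0, target].item()
            heat[a, b] = base_p - p                # drop in target confidence
print(heat)  # large values = regions that mattered most
```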
  5. Deep convolutional neural networks (CNNs) now match human accuracy in many image prediction tasks, resulting in growing adoption in e-commerce, radiology, and other domains. Naturally, "explaining" CNN predictions is a key concern for many users. Since the internal workings of CNNs are unintuitive for most users, occlusion-based explanations (OBE) are popular for understanding which parts of an image matter most for a prediction. One occludes a region of the image with a patch and moves it around to produce a heatmap of changes to the prediction probability. This approach is computationally expensive due to the large number of re-inference requests produced, which wastes time and raises resource costs. We tackle this issue by casting the OBE task as a new instance of the classical incremental view maintenance problem. We create a novel and comprehensive algebraic framework for incremental CNN inference that combines materialized views with multi-query optimization to reduce computational costs. We then present two novel approximate inference optimizations that exploit the semantics of CNNs and the OBE task to further reduce runtimes. We prototype our ideas in a tool we call Krypton. Experiments with real data and CNNs show that Krypton reduces runtimes by up to 5× (resp. 35×) to produce exact (resp. high-quality approximate) results without raising resource requirements.
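To make the incremental view maintenance framing tangible, here is a toy single-conv-layer version of the idea: materialize the layer's output once, and after a local occlusion recompute only the affected slice and splice it into the cached view. This is hypothetical illustration code, not Krypton's API, and it assumes an interior patch so no border clamping is needed.

```python
# Toy incremental view maintenance for one conv layer (k=3, stride 1, pad 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1).eval()
x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    view = conv(x)                         # materialized feature "view"

r0 = c0 = 24; p = 16                       # occlude an interior 16x16 patch
x2 = x.clone()
x2[:, :, r0:r0+p, c0:c0+p] = 0

# Output pixel (i, j) reads input (i-1..i+1, j-1..j+1), so the dirty
# output region is the patch grown by one pixel per side.
lo, hi = r0 - 1, r0 + p + 1                # dirty rows/cols: [lo, hi)
with torch.no_grad():
    slab = x2[:, :, lo-1:hi+1, lo-1:hi+1]  # dirty region + 1-pixel halo
    incr = view.clone()
    incr[:, :, lo:hi, lo:hi] = F.conv2d(slab, conv.weight, conv.bias)
    print(torch.allclose(incr, conv(x2), atol=1e-5))  # True: view maintained
```

Krypton applies this rewriting across the whole network, where the dirty region grows layer by layer, which is why materialized views and multi-query optimization pay off.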