

Title: Neural Fields in Visual Computing and Beyond
Abstract

Recent advances in machine learning have led to increased interest in solving visual computing problems using methods that employ coordinate-based neural networks. These methods, which we call neural fields, parameterize physical properties of scenes or objects across space and time. They have seen widespread success in problems such as 3D shape and image synthesis, animation of human bodies, 3D reconstruction, and pose estimation. Rapid progress has led to numerous papers, but a consolidation of the discovered knowledge has not yet emerged. We provide context, mathematical grounding, and a review of over 250 papers in the literature on neural fields. In Part I, we focus on neural field techniques by identifying common components of neural field methods, including different conditioning, representation, forward map, architecture, and manipulation methods. In Part II, we focus on applications of neural fields to different problems in visual computing, and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in current incarnations, and highlights the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion website that acts as a living database that can be continually updated by the community.
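To make the central idea concrete, a neural field is simply a small network that maps a coordinate (e.g., a 3D point) to a field value (e.g., a density or color). The following is a minimal sketch in NumPy; the architecture, layer sizes, and sinusoidal positional encoding are generic illustrative choices, not the design of any specific paper in the survey.

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    # Lift low-dimensional coordinates to a higher-dimensional feature
    # vector with sin/cos at geometrically spaced frequencies.
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    scaled = coords[..., None] * freqs                      # (..., D, F)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)              # (..., D*2F)

class NeuralField:
    """Tiny randomly initialized MLP f_theta: R^d -> R^k that maps
    spatial coordinates to field values (a sketch, untrained)."""
    def __init__(self, in_dim=3, hidden=64, out_dim=1, num_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        enc_dim = in_dim * 2 * num_freqs
        self.num_freqs = num_freqs
        self.W1 = rng.normal(0.0, np.sqrt(2.0 / enc_dim), (enc_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, np.sqrt(2.0 / hidden), (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, coords):
        h = positional_encoding(coords, self.num_freqs)
        h = np.maximum(h @ self.W1 + self.b1, 0.0)          # ReLU
        return h @ self.W2 + self.b2

field = NeuralField(in_dim=3, out_dim=1)
xyz = np.random.default_rng(1).uniform(-1, 1, (1024, 3))
density = field(xyz)   # one scalar field value per 3D query point
```

In practice the weights would be fit by gradient descent through a forward map (e.g., volume rendering) so the field reproduces observations; the survey's taxonomy of conditioning, representation, and forward maps all plug into this basic coordinate-in, value-out structure.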

 
NSF-PAR ID:
10367766
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
41
Issue:
2
ISSN:
0167-7055
Page Range / eLocation ID:
p. 641-676
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Machine learning is an expanding field with an ever-increasing role in everyday life, with its utility in the industrial, agricultural, and medical sectors being undeniable. Recently, this utility has come in the form of machine learning implementation on embedded system devices. While there have been steady advances in the performance, memory, and power consumption of embedded devices, most machine learning algorithms still have very high power consumption and computational demands, making embedded machine learning somewhat difficult to implement. However, different devices can be implemented for different applications based on their overall processing power and performance. This paper presents an overview of several different implementations of machine learning on embedded systems, organized by specific device, application, machine learning algorithm, and sensors. We mainly focus on NVIDIA Jetson and Raspberry Pi devices, with a few less commonly used embedded computers, as well as which of these devices were more commonly used for specific applications in different fields. We also briefly analyze the ML models most commonly implemented on these devices and the sensors used to gather input in the field. All of the papers included in this review were selected using Google Scholar and the IEEE Xplore database. The selection criterion was the use of embedded computing systems in either a theoretical study or a practical implementation of machine learning models. The papers needed to provide one or, preferably, all of the following results: the overall accuracy of the models on the system, the overall power consumption of the embedded machine learning system, and the inference time of their models on the embedded system.
Embedded machine learning is experiencing an explosion in both scale and scope, driven by advances in system performance and machine learning models as well as by the greater affordability and accessibility of both. Improvements are noted in quality, power usage, and effectiveness.
  2. Background

    We performed a systematic review that identified at least 9,000 scientific papers on PubMed that include immunofluorescent images of cells from the central nervous system (CNS). These CNS papers contain tens of thousands of immunofluorescent neural images supporting the findings of over 50,000 associated researchers. While many existing reviews discuss different aspects of immunofluorescent microscopy, such as image acquisition and staining protocols, few papers discuss immunofluorescent imaging from an image-processing perspective. We analyzed the literature to determine the image processing methods that were commonly published alongside the associated CNS cell, microscopy technique, and animal model, and highlight gaps in image processing documentation and reporting in the CNS research field.

    Methods

    We completed a comprehensive search of PubMed publications using Medical Subject Headings (MeSH) terms and other general search terms for CNS cells and common fluorescent microscopy techniques. Publications were found on PubMed using a combination of column description terms and row description terms. We manually tagged the comma-separated values (CSV) metadata file of each publication with the following categories: animal or cell model, quantified features, threshold techniques, segmentation techniques, and image processing software.
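The tagging scheme described could be represented as one structured record per publication. The sketch below shows one way to store and query such tags with Python's standard csv module; all field names and example values are invented for illustration and are not the authors' actual schema.

```python
import csv
import io

# Hypothetical tagging schema; the category names follow the Methods
# text, but the column names and row values are invented examples.
TAG_FIELDS = ["pmid", "model", "quantified_features",
              "threshold_technique", "segmentation_technique", "software"]

rows = [
    {"pmid": "12345678", "model": "mouse",
     "quantified_features": "cell count;soma area",
     "threshold_technique": "Otsu",
     "segmentation_technique": "watershed",
     "software": "ImageJ"},
    {"pmid": "87654321", "model": "rat",
     "quantified_features": "branch length",
     "threshold_technique": "",          # not reported in the paper
     "segmentation_technique": "",
     "software": ""},
]

# Write the tagged metadata as CSV (in memory here; a file in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=TAG_FIELDS)
writer.writeheader()
writer.writerows(rows)

# Re-read and filter, e.g. publications that report any threshold method.
tagged = list(csv.DictReader(io.StringIO(buf.getvalue())))
with_threshold = [r for r in tagged if r["threshold_technique"]]
```

Storing empty strings for unreported methods makes the reporting gaps the review quantifies directly countable from the table.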

    Results

    Of the almost 9,000 immunofluorescent imaging papers identified in our search, only 856 explicitly include image processing information. Moreover, hundreds of the 856 papers are missing thresholding, segmentation, and morphological feature details necessary for explainable, unbiased, and reproducible results. In our assessment of the literature, we visualized current image processing practices, compiled the image processing options from the top twelve software programs, and designed a road map to enhance image processing. We determined that thresholding and segmentation methods were often omitted from publications, underreported, or underutilized in quantitative CNS cell research.

    Discussion

    Less than 10% of papers with immunofluorescent images include image processing in their methods. A few authors are implementing advanced image analysis methods to quantify over 40 different CNS cell features, providing quantitative insights that will advance CNS research. However, our review argues that image analysis will remain limited in rigor and reproducibility without more rigorous and detailed reporting of image processing methods.

    Conclusion

    Image processing is a critical part of CNS research that must be improved to increase scientific insight, explainability, reproducibility, and rigor.

     
  3. The Pearson correlation coefficient squared, r², is an important tool used in the analysis of neural data to quantify the similarity between neural tuning curves. Yet this metric is biased by trial-to-trial variability; as trial-to-trial variability increases, measured correlation decreases. Major lines of research are confounded by this bias, including those involving the study of invariance of neural tuning across conditions and the analysis of the similarity of tuning across neurons. To address this, we extend an estimator, r̂²_ER, that was recently developed for estimating model-to-neuron correlation, in which a noisy signal is compared with a noise-free prediction, to the case of neuron-to-neuron correlation, in which two noisy signals are compared with each other. We compare the performance of our novel estimator to a prior method developed by Spearman, commonly used in other fields but widely overlooked in neuroscience, and find that our method has less bias. We then apply our estimator to demonstrate how it avoids drastic confounds introduced by trial-to-trial variability using data collected in two prior studies (macaque, both sexes) that examined two different forms of invariance in the neural encoding of visual inputs—translation invariance and fill-outline invariance. Our results quantify for the first time the gradual falloff with spatial offset of translation-invariant shape selectivity within visual cortical neuronal receptive fields and offer a principled method to compare invariance in noisy biological systems to that in noise-free models.

    Significance Statement

    Quantifying the similarity between two sets of averaged neural responses is fundamental to the analysis of neural data. A ubiquitous metric of similarity, the correlation coefficient, is attenuated by trial-to-trial variability that arises from many irrelevant factors. Spearman recognized this problem and proposed a correction that has been extended over the past century. We show this method has large asymptotic biases that can be overcome using a novel estimator. Despite the frequent use of the correlation coefficient in neuroscience, consensus on how to address this fundamental statistical issue has not been reached. We provide an accurate estimator of the correlation coefficient and apply it to gain insight into visual invariance.
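The classical Spearman correction that this abstract takes as its baseline can be sketched briefly: divide the observed squared correlation between trial-averaged tuning curves by the product of each neuron's reliability. The split-half reliability scheme and the synthetic data below are illustrative assumptions; this is the century-old baseline, not the authors' r̂²_ER estimator.

```python
import numpy as np

def split_half_reliability(trials):
    """trials: (n_trials, n_conditions) response matrix. Returns the
    Spearman-Brown-adjusted correlation between tuning curves computed
    from two random halves of the trials (one common reliability proxy)."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(trials.shape[0])
    half = trials.shape[0] // 2
    a = trials[idx[:half]].mean(axis=0)
    b = trials[idx[half:]].mean(axis=0)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)   # Spearman-Brown: half-data -> full-data

def spearman_corrected_r2(trials_x, trials_y):
    """Naive r^2 between trial-averaged tuning curves, and the classic
    Spearman attenuation-corrected r^2 (divide by the reliabilities)."""
    r = np.corrcoef(trials_x.mean(axis=0), trials_y.mean(axis=0))[0, 1]
    rel_x = split_half_reliability(trials_x)
    rel_y = split_half_reliability(trials_y)
    return r**2, r**2 / (rel_x * rel_y)

# Synthetic example: two neurons sharing one true tuning curve plus
# independent trial-to-trial noise, so the true r^2 is 1.
rng = np.random.default_rng(42)
tuning = np.sin(np.linspace(0, 2 * np.pi, 20))       # shared tuning
x = tuning + rng.normal(0, 0.5, (30, 20))            # neuron 1, 30 trials
y = tuning + rng.normal(0, 0.5, (30, 20))            # neuron 2, 30 trials
naive, corrected = spearman_corrected_r2(x, y)
```

The naive estimate falls below the true value of 1 because of trial noise, and the correction pushes it back up; the abstract's point is that this classical correction itself carries asymptotic bias that the proposed estimator reduces.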

     
  4.
    Chemistry is considered one of the more promising scientific applications of near-term quantum computing. Recent work in transitioning classical algorithms to a quantum computer has led to great strides in improving quantum algorithms and illustrating their quantum advantage. Because of the limitations of near-term quantum computers, the most effective strategies split the work over classical and quantum computers. There is a proven set of methods in computational chemistry and materials physics that has used this same idea of splitting a complex physical system into parts that are treated at different levels of theory to obtain solutions for the complete physical system for which a brute-force solution with a single method is not feasible. These methods are variously known as embedding, multi-scale, or fragment techniques. We review these methods and then propose the embedding approach as a method for describing complex biochemical systems, with the parts not only treated with different levels of theory but also computed with hybrid classical and quantum algorithms. Such strategies are critical if one wants to expand the focus to biochemical molecules that contain active regions that cannot be properly explained with traditional algorithms on classical computers. While we do not solve this problem here, we provide an overview of where the field is going to enable such problems to be tackled in the future.
  5. Activity-dependent neuronal plasticity is crucial for animals to adapt to dynamic sensory environments. Traditionally, it has been investigated in animal models using deprivation approaches, primarily in sensory cortices. Nevertheless, emerging evidence emphasizes its significance in sensory organs and in sub-cortical regions where cranial nerves relay information to the brain. Additionally, critical questions have started to arise. Do different sensory modalities share common cellular mechanisms for deprivation-induced plasticity at these central entry points? Does the deprivation duration correlate with specific plasticity mechanisms?

    This study systematically reviews and meta-analyzes research papers that investigated visual, auditory, or olfactory deprivation in rodents of both sexes. It examines the consequences of sensory deprivation in homologous regions at the first central synapse following cranial nerve transmission (vision: lateral geniculate nucleus and superior colliculus; audition: ventral and dorsal cochlear nucleus; olfaction: olfactory bulb). The systematic search yielded 91 papers (39 vision, 22 audition, 30 olfaction), revealing substantial heterogeneity in publication trends, experimental methods, measures of plasticity, and reporting across the sensory modalities. Despite these differences, commonalities emerged when correlating plasticity mechanisms with the duration of sensory deprivation. Short-term deprivation (up to 1 day) reduced activity and increased disinhibition, medium-term deprivation (1 day to a week) involved glial changes and synaptic remodelling, and long-term deprivation (over a week) primarily led to structural alterations.

    These findings underscore the importance of standardizing methodologies and reporting practices. Additionally, they highlight the value of cross-modal synthesis for understanding how the nervous system, including peripheral, pre-cortical, and cortical areas, responds to and compensates for the loss of sensory input.

    Significance Statement

    This study addresses the critical issue of sensory loss and its impact on the brain's adaptability, shedding light on how different sensory systems respond to loss of inputs from the environment. While past research has primarily explored early-life sensory deprivation, this study focuses on the effects of sensory loss in post-weaning rodents. By systematically reviewing 91 research articles, the findings reveal distinct responses based on the duration of sensory deprivation. This research not only enhances our understanding of brain plasticity but also has broad implications for translational applications, particularly in cross-modal plasticity, offering valuable insights into neuroscientific research and potential clinical interventions.

     