
Title: Fast Light-Weight Near-Field Photometric Stereo
We introduce the first end-to-end learning-based solution to near-field Photometric Stereo (PS), where the light sources are close to the object of interest. This setup is especially useful for reconstructing large immobile objects. Our method is fast, producing a mesh from 52 images at 512x384 resolution in about 1 second on a commodity GPU, thus potentially unlocking several AR/VR applications. Existing approaches rely on optimization coupled with a far-field PS network operating on pixels or small patches. Using optimization makes these approaches slow and memory intensive (requiring 17 GB of GPU and 27 GB of CPU memory), while using only pixels or patches makes them highly susceptible to noise and calibration errors. To address these issues, we develop a recursive multi-resolution scheme that estimates surface normal and depth maps of the whole image at each step. The predicted depth map at each scale is then used to estimate 'per-pixel lighting' for the next scale. This design makes our approach almost 45x faster and 2 degrees more accurate (11.3 vs. 13.3 degrees Mean Angular Error) than the state-of-the-art near-field PS reconstruction technique, which uses iterative optimization.
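To make the notion of 'per-pixel lighting' concrete, the sketch below is a minimal NumPy illustration, not the paper's implementation: the function name, the simple inverse-square point-light model, and the omission of angular falloff are assumptions. It shows how a depth map and a calibrated point-light position yield a light direction and an attenuation factor for every pixel, which is the quantity re-estimated from the predicted depth at each scale.

    import numpy as np

    def per_pixel_lighting(depth, K, light_pos):
        # depth: (H, W) Z-depth map in meters; K: 3x3 camera intrinsics;
        # light_pos: (3,) point-light position in the camera frame.
        # Simplified near-field model: inverse-square falloff, no angular anisotropy.
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        rays = np.stack([u, v, np.ones_like(u)], axis=-1) @ np.linalg.inv(K).T
        X = rays * depth[..., None]                  # back-projected 3D surface points
        to_light = light_pos[None, None, :] - X      # per-pixel vector to the light
        dist = np.linalg.norm(to_light, axis=-1)
        L = to_light / np.clip(dist[..., None], 1e-8, None)   # unit light directions
        attenuation = 1.0 / np.clip(dist ** 2, 1e-8, None)    # inverse-square falloff
        return L, attenuation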
Journal Name: IEEE Conference on Computer Vision and Pattern Recognition
Sponsoring Org: National Science Foundation
More Like this
  1. Site description. This data package consists of data obtained from sampling surface soil (the 0-7.6 cm depth profile) in black mangrove (Avicennia germinans) dominated forest and black needlerush (Juncus roemerianus) saltmarsh along the Gulf of Mexico coastline in peninsular west-central Florida, USA. This location has a subtropical climate with mean daily temperatures ranging from 15.4 °C in January to 27.8 °C in August, and annual precipitation of 1336 mm. Precipitation falls as rain primarily between June and September. Tides are semi-diurnal, with 0.57 m median amplitudes during the year preceding sampling (U.S. NOAA National Ocean Service, Clearwater Beach, Florida, station 8726724). Sea-level rise is 4.0 ± 0.6 mm per year (1973-2020 trend, mean ± 95 % confidence interval, NOAA NOS Clearwater Beach station). The A. germinans mangrove zone is either adjacent to water or fringed on the seaward side by a narrow band of red mangrove (Rhizophora mangle). A near-monoculture of J. roemerianus is often adjacent to and immediately landward of the A. germinans zone. The transition from the mangrove to the J. roemerianus zone is variable in our study area. An abrupt edge between closed-canopy mangrove and J. roemerianus monoculture may extend for up to several hundred meters in some locations, while other stretches of ecotone present a gradual transition where smaller, widely spaced trees are interspersed into the herbaceous marsh. Juncus roemerianus then extends landward to a high marsh patchwork of succulent halophytes (including Salicornia bigelovii, Sesuvium sp., and Batis maritima), scattered dwarf mangrove, and salt pans, followed in turn by upland vegetation that includes Pinus sp. and Serenoa repens. Field design and sample collection. We established three study sites spaced at approximately 5 km intervals along the western coastline of the central Florida peninsula. The sites consisted of the Salt Springs (28.3298°, -82.7274°), Energy Marine Center (28.2903°, -82.7278°), and Green Key (28.2530°, -82.7496°) sites on the Gulf of Mexico coastline in Pasco County, Florida, USA. At each site, we established three plot pairs, each consisting of one saltmarsh plot and one mangrove plot. Plots were 50 m^2 in size. Plot pairs within a site were separated by 230-1070 m, and the mangrove and saltmarsh plots composing a pair were 70-170 m apart. All plot pairs consisted of directly adjacent patches of mangrove forest and J. roemerianus saltmarsh, with the mangrove forests exhibiting a closed canopy and a tree architecture (height 4-6 m, crown width 1.5-3 m). Mangrove plots were located at approximately the midpoint between the seaward edge (water-mangrove interface) and landward edge (mangrove-marsh interface) of the mangrove zone. Saltmarsh plots were located 20-25 m away from any mangrove trees and into the J. roemerianus zone (i.e., landward from the mangrove-marsh interface). Plot pairs were coarsely similar in geomorphic setting, as all were located on the Gulf of Mexico coastline, rather than within major sheltering formations like Tampa Bay, and all plot pairs fit the tide-dominated domain of the Woodroffe classification (Woodroffe, 2002, "Coasts: Form, Process and Evolution", Cambridge University Press), given their conspicuous semi-diurnal tides. There was nevertheless some geomorphic variation, as some plot pairs were directly open to the Gulf of Mexico while others sat behind keys and spits or along small tidal creeks. 
Our use of a plot-pair approach is intended to control for this geomorphic variation. Plot center elevations (cm above mean sea level, NAVD 88) were estimated by overlaying the plot locations determined with a global positioning system (Garmin GPS 60, Olathe, KS, USA) on a LiDAR-derived bare-earth digital elevation model (Dewberry, Inc., 2019). The digital elevation model had a vertical accuracy of ± 10 cm (95 % CI) and a horizontal accuracy of ± 116 cm (95 % CI). Soil samples were collected via coring at low tide in June 2011. From each plot, we collected a composite soil sample consisting of three discrete 5.1 cm diameter soil cores taken at equidistant points to 7.6 cm depth. Cores were taken by tapping a sleeve into the soil until its top was flush with the soil surface, sliding a hand under the core, and lifting it up. Cores were then capped and transferred on ice to our laboratory at the University of South Florida (Tampa, Florida, USA), where they were combined in plastic zipper bags and homogenized by hand into plot-level composite samples on the day they were collected. A damp soil subsample was immediately taken from each composite sample to initiate 1 y incubations for determination of active C and N (see below). The remainder of each composite sample was then placed in a drying oven (60 °C) for 1 week with frequent mixing of the soil to prevent aggregation and liberate water. Organic wetland soils are sometimes dried at 70 °C; however, high drying temperatures can volatilize non-water liquids and oxidize and decompose organic matter, so 50 °C is also a common drying temperature for organic soils (Gardner, 1986, "Methods of Soil Analysis: Part 1", Soil Science Society of America); we accordingly chose 60 °C as a compromise between sufficient water removal and avoidance of non-water mass loss. Bulk density was determined as soil dry mass per core volume (adding back the dry mass equivalent of the damp subsample removed prior to drying). Dried subsamples were obtained for determination of soil organic matter (SOM), mineral texture composition, and extractable and total carbon (C) and nitrogen (N) within the following week. Sample analyses. A dried subsample was apportioned from each composite sample to determine SOM as mass loss on ignition at 550 °C for 4 h. After organic matter was removed from soil via ignition, mineral particle size composition was determined using a combination of wet sieving and density separation in 49 mM (3 %) sodium hexametaphosphate ((NaPO_3)_6) following procedures in Kettler et al. (2001, Soil Science Society of America Journal 65, 849-852). The percentage of dry soil mass composed of silt and clay particles (hereafter, fines) was calculated as the mass lost from dispersed mineral soil after sieving (0.053 mm mesh sieve). Fines could have been slightly underestimated if any clay particles were burned off during the preceding ignition of soil. An additional subsample was taken from each composite sample to determine extractable N and organic C concentrations via 0.5 M potassium sulfate (K_2SO_4) extractions. 
We combined soil and extractant (ratio of 1 g dry soil:5 mL extractant) in plastic bottles, reciprocally shook the slurry for 1 h at 120 rpm, and then gravity filtered it through Fisher G6 (1.6 μm pore size) glass fiber filters, followed by colorimetric detection of nitrite (NO_2^-) + nitrate (NO_3^-) and ammonium (NH_4^+) in the filtrate (Hood-Nowotny et al., 2010, Soil Science Society of America Journal 74, 1018-1027) using a microplate spectrophotometer (Biotek Epoch, Winooski, VT, USA). Filtrate was also analyzed for dissolved organic C (referred to hereafter as extractable organic C) and total dissolved N via combustion and oxidation followed by detection of the evolved CO_2 and N oxide gases on a Formacs HT TOC/TN analyzer (Skalar, Breda, The Netherlands). Extractable organic N was then computed as total dissolved N in filtrate minus extractable mineral N (itself the sum of extractable NH_4-N and NO_2-N + NO_3-N). We determined soil total C and N from dried, milled subsamples subjected to elemental analysis (ECS 4010, Costech, Inc., Valencia, CA, USA) at the University of South Florida Stable Isotope Laboratory. Median concentration of inorganic C in unvegetated surface soil at our sites is 0.5 % of soil mass (Anderson, 2019, Univ. of South Florida M.S. thesis via methods in Wang et al., 2011, Environmental Monitoring and Assessment 174, 241-257). Inorganic C concentrations are likely even lower in our samples from under vegetation, where organic matter would dilute the contribution of inorganic C to soil mass. Nevertheless, the presence of a small inorganic C pool in our soils may be counted in the total C values we report. Extractable organic C is necessarily of organic C origin given the method (sparging with HCl) used in detection. Active C and N represent the fractions of organic C and N that are mineralizable by soil microorganisms under aerobic conditions in long-term soil incubations. To quantify active C and N, 60 g of field-moist soil were apportioned from each composite sample, placed in a filtration apparatus, and incubated in the dark at 25 °C and field capacity moisture for 365 d (as in Lewis et al., 2014, Ecosphere 5, art59). Moisture levels were maintained by frequently weighing the incubated soil and wetting it up to target mass. Daily CO_2 flux was quantified on 29 occasions at 0.5-3 week intervals during the incubation period (with shorter intervals earlier in the incubation), and these per-day flux rates were integrated over the 365 d period to compute an estimate of active C. Observations of per-day flux were made by sealing samples overnight in airtight chambers fitted with septa and quantifying headspace CO_2 accumulation by injecting headspace samples (obtained through the septa via needle and syringe) into an infrared gas analyzer (PP Systems EGM 4, Amesbury, MA, USA). To estimate active N, each incubated sample was leached with a C- and N-free, 35 psu solution containing micronutrients (Nadelhoffer, 1990, Soil Science Society of America Journal 54, 411-415) on 19 occasions at increasing 1-6 week intervals during the 365 d incubation, and then extracted in 0.5 M K_2SO_4 at the end of the incubation in order to remove any residual mineral N. Active N was then quantified as the total mass of mineral N leached and extracted. Mineral N in leached and extracted solutions was detected as NH_4-N and NO_2-N + NO_3-N via colorimetry as above. 
This incubation technique precludes new C and N inputs and persistently leaches mineral N, forcing microorganisms to meet demand by mineralizing existing pools, and thereby directly assays the potential activity of soil organic C and N pools present at the time of soil sampling. Because this analysis commences with disrupting soil physical structure, it is biased toward higher estimates of active fractions. Calculations. Non-mobile C and N fractions were computed as total C and N concentrations minus the extractable and active fractions of each element. This data package reports surface-soil constituents (moisture, fines, SOM, and C and N pools and fractions) in both gravimetric units (mass constituent / mass soil) and areal units (mass constituent / soil surface area integrated through 7.6 cm soil depth, the depth of sampling). Areal concentrations were computed as X × D × 7.6, where X is the gravimetric concentration of a soil constituent, D is soil bulk density (g dry soil / cm^3), and 7.6 is the sampling depth in cm. 
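As a worked illustration of the calculations described above (placeholder numbers and variable names only, not values from the data package), the snippet below computes extractable organic N, a non-mobile fraction, the gravimetric-to-areal conversion X × D × 7.6, and a trapezoidal integration of spot CO2 flux measurements of the kind used to estimate active C.

    import numpy as np

    # Placeholder per-plot values (units follow the text); not data from this package.
    bulk_density = 0.55            # D, g dry soil / cm^3
    total_c = 32.0                 # mg C / g dry soil
    extractable_oc = 0.9           # mg C / g dry soil (K2SO4-extractable organic C)
    active_c = 4.1                 # mg C / g dry soil (from the 365-d incubation)
    tdn = 0.60                     # mg N / g dry soil (total dissolved N in extract)
    nh4_n, no2_no3_n = 0.20, 0.05  # mg N / g dry soil (extractable mineral N)

    # Extractable organic N = total dissolved N minus extractable mineral N
    extractable_on = tdn - (nh4_n + no2_no3_n)

    # Non-mobile fraction = total minus the extractable and active fractions
    nonmobile_c = total_c - extractable_oc - active_c

    # Gravimetric (mass / mass soil) to areal (mass / cm^2 over the 7.6 cm depth)
    def to_areal(x_gravimetric, d_bulk, depth_cm=7.6):
        return x_gravimetric * d_bulk * depth_cm    # X * D * 7.6

    areal_total_c = to_areal(total_c, bulk_density)  # mg C per cm^2 of soil surface

    # Active C: integrate spot measurements of daily CO2-C flux over the incubation
    days = np.array([1.0, 7.0, 21.0, 90.0, 200.0, 365.0])    # placeholder sampling days
    flux = np.array([0.05, 0.04, 0.03, 0.02, 0.015, 0.01])   # mg C / g soil / day
    active_c_estimate = np.trapz(flux, days)                 # mg C / g soil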
  2. Deep learning has made great strides in medical imaging, enabled by hardware advances in GPUs. One major constraint for the development of new models has been the saturation of GPU memory resources during training. This is especially true in computational pathology, where images regularly contain more than 1 billion pixels. These pathological images are traditionally divided into small patches to enable deep learning due to hardware limitations. In this work, we explore whether the shared GPU/CPU memory architecture on the M1 Ultra systems-on-a-chip (SoCs) recently released by Apple, Inc. may provide a solution. These affordable systems (less than $5000) provide access to 128 GB of unified memory (Mac Studio with M1 Ultra SoC). As a proof of concept for gigapixel deep learning, we identified tissue from background on gigapixel areas from whole slide images (WSIs). The model was a modified U-Net (4492 parameters) leveraging large kernels and high stride. The M1 Ultra SoC was able to train the model directly on gigapixel images (16000×64000 pixels, 1.024 billion pixels) with a batch size of 1 using over 100 GB of unified memory for the process at an average speed of 1 minute and 21 seconds per batch with Tensorflow 2/Keras. As expected, the model converged with a high Dice score of 0.989 ± 0.005. Training up until this point took 111 hours and 24 minutes over 4940 steps. Other high RAM GPUs like the NVIDIA A100 (largest commercially accessible at 80 GB, ∼$15000) are not yet widely available (in preview for select regions on Amazon Web Services at $40.96/hour as a group of 8). This study is a promising step towards WSI-wise end-to-end deep learning with prevalent network architectures. 
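As a rough sketch of the kind of architecture described (a hypothetical Keras model, not the authors' exact 4492-parameter network; layer counts, kernel sizes, and strides are illustrative), large kernels combined with aggressive striding shrink activations quickly, which is what keeps a gigapixel input within unified memory:

    import tensorflow as tf
    from tensorflow.keras import layers

    def tiny_tissue_segmenter(input_shape=(None, None, 3)):
        # Fully convolutional encoder-decoder with large kernels and high stride,
        # so intermediate activations are ~32x smaller than the input in each axis.
        inputs = tf.keras.Input(shape=input_shape)
        x = layers.Conv2D(8, kernel_size=15, strides=8, padding="same",
                          activation="relu")(inputs)
        x = layers.Conv2D(16, kernel_size=7, strides=4, padding="same",
                          activation="relu")(x)
        x = layers.Conv2DTranspose(8, kernel_size=7, strides=4, padding="same",
                                   activation="relu")(x)
        outputs = layers.Conv2DTranspose(1, kernel_size=15, strides=8, padding="same",
                                         activation="sigmoid")(x)  # tissue-vs-background mask
        return tf.keras.Model(inputs, outputs)

    # model = tiny_tissue_segmenter()
    # model.compile(optimizer="adam", loss="binary_crossentropy")
    # model.fit(gigapixel_dataset, batch_size=1)   # batch size 1, as in the study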
  3. Obeid, I.; Selesnick, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining an historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations. 
The overall impact of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but that is expensive since you need to do multiple runs over the data, which further taxes a computing infrastructure already running at max capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, this adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA’s cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but these algorithms are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment, and whether the changes we make are effective. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues. 
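A minimal sketch of the seeding and precision steps described above (the seed value and the determinism flag are illustrative; as noted, these steps reduce but do not eliminate run-to-run variation):

    import os, random
    import numpy as np
    import tensorflow as tf

    SEED = 1337
    os.environ["PYTHONHASHSEED"] = str(SEED)   # fix Python's hash-based ordering
    random.seed(SEED)                          # Python RNG
    np.random.seed(SEED)                       # NumPy RNG
    tf.random.set_seed(SEED)                   # TensorFlow RNG (initializers, dropout)

    # Request deterministic kernels where the TensorFlow version supports it.
    os.environ["TF_DETERMINISTIC_OPS"] = "1"

    # Keep computation in 32-bit floats rather than Python's default 64-bit.
    tf.keras.backend.set_floatx("float32")

    # Seeding alone does not guarantee identical runs: cuDNN kernel selection,
    # CPU multi-threading, and data-shuffling order can still introduce variation.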
  4. Today’s large-scale scientific applications running on high-performance computing (HPC) systems generate vast data volumes. Thus, data compression is becoming a critical technique to mitigate the storage burden and data-movement cost. However, existing lossy compressors for scientific data cannot achieve a high compression ratio and throughput simultaneously, hindering their adoption in many applications requiring fast compression, such as in-memory compression. To this end, in this work, we develop a fast and high-ratio error-bounded lossy compressor on GPUs for scientific data (called FZ-GPU). Specifically, we first design a new compression pipeline that consists of fully parallelized quantization, bitshuffle, and our newly designed fast encoding. Then, we propose a series of deep architectural optimizations for each kernel in the pipeline to take full advantage of CUDA architectures. We propose a warp-level optimization to avoid data conflicts for bit-wise operations in bitshuffle, maximize shared memory utilization, and eliminate unnecessary data movements by fusing different compression kernels. Finally, we evaluate FZ-GPU on two NVIDIA GPUs (i.e., A100 and RTX A4000) using six representative scientific datasets from SDRBench. Results on the A100 GPU show that FZ-GPU achieves an average speedup of 4.2× over cuSZ and an average speedup of 37.0× over a multi-threaded CPU implementation of our algorithm under the same error bound. FZ-GPU also achieves an average speedup of 2.3× and an average compression ratio improvement of 2.0× over cuZFP under the same data distortion. 
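To illustrate the first two pipeline stages conceptually, the sketch below is a CPU/NumPy illustration under an absolute error bound; the function names are illustrative and this is not FZ-GPU's fused CUDA implementation:

    import numpy as np

    def quantize_error_bounded(data, abs_err):
        # Uniform scalar quantization: each reconstructed value (code * 2 * abs_err)
        # differs from the original by at most abs_err, the user-set error bound.
        return np.round(data / (2.0 * abs_err)).astype(np.int32)

    def bitshuffle(codes):
        # Regroup the 32-bit codes bit-plane by bit-plane, so the mostly-zero high
        # bits of neighboring codes form long runs for the later encoding stage.
        bits = np.unpackbits(codes.view(np.uint8).reshape(-1, 4),
                             axis=1, bitorder="little")        # (n, 32) bit matrix
        return np.packbits(bits.T, axis=1, bitorder="little")  # contiguous bit-planes

    x = np.random.rand(1 << 20).astype(np.float32)
    planes = bitshuffle(quantize_error_bounded(x, abs_err=1e-3))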
  5. Current deep neural network approaches for camera pose estimation rely on scene structure for 3D motion estimation, but this decreases robustness and thereby makes cross-dataset generalization difficult. In contrast, classical approaches to structure from motion estimate 3D motion using optical flow and then compute depth. Their accuracy, however, depends strongly on the quality of the optical flow. To avoid this issue, direct methods have been proposed, which separate 3D motion from depth estimation, but compute 3D motion using only image gradients in the form of normal flow. In this paper, we introduce NFlowNet, a network for normal flow estimation that is used to enforce robust and direct constraints. In particular, normal flow is used to estimate relative camera pose based on the cheirality (depth positivity) constraint. We achieve this by formulating the optimization problem as a differentiable cheirality layer, which allows for end-to-end learning of camera pose. We perform extensive qualitative and quantitative evaluation of the proposed DiffPoseNet’s sensitivity to noise and its generalization across datasets. We compare our approach to existing state-of-the-art methods on KITTI, TartanAir, and TUM-RGBD datasets. 
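As a conceptual sketch of the depth-positivity idea (a NumPy illustration for a purely translational camera motion; the function name and the hinge penalty are assumptions, not DiffPoseNet's actual differentiable layer, which would be written in an autodiff framework):

    import numpy as np

    def cheirality_penalty(flow, coords, t, f=1.0):
        # flow:   (..., 2) measured image motion at each pixel
        # coords: (..., 2) pixel coordinates (x, y) centered at the principal point
        # t:      (3,) candidate camera translation; f: focal length
        # For pure translation the motion field is u = (1/Z) * A(x, y) @ t with
        # A = [[-f, 0, x], [0, -f, y]], so the least-squares inverse depth is
        # (u . At) / ||At||^2; cheirality (Z > 0) is violated when it is negative.
        x, y = coords[..., 0], coords[..., 1]
        At = np.stack([-f * t[0] + x * t[2],
                       -f * t[1] + y * t[2]], axis=-1)
        inv_depth = np.sum(flow * At, axis=-1) / (np.sum(At * At, axis=-1) + 1e-8)
        return np.mean(np.maximum(0.0, -inv_depth))   # hinge on negative inverse depth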