Abstract: We describe the survey design, calibration, commissioning, and emission-line detection algorithms for the Hobby–Eberly Telescope Dark Energy Experiment (HETDEX). The goal of HETDEX is to measure the redshifts of over a million Lyα-emitting galaxies between 1.88 < z < 3.52, in a 540 deg² area encompassing a comoving volume of 10.9 Gpc³. No preselection of targets is involved; instead, the HETDEX measurements are accomplished via a spectroscopic survey using a suite of wide-field integral field units distributed over the focal plane of the telescope. This survey measures the Hubble expansion parameter and angular diameter distance, with a final expected accuracy of better than 1%. We detail the project’s observational strategy, reduction pipeline, source detection, and catalog generation, and present initial results for science verification in the Cosmological Evolution Survey, Extended Groth Strip, and Great Observatories Origins Deep Survey North fields. We demonstrate that our data reach the required specifications in throughput, astrometric accuracy, flux limit, and object detection, with the end products being a catalog of emission-line sources, their object classifications, and flux-calibrated spectra.
Sensitivity Analysis of a Calibrated Data Center Model to Minimize the Site Survey Effort
To reproduce a data center (DC) as a Digital Twin (DT), input data collected through site surveys is required. Data collection is an important step, since an accurate representation of a DC depends on capturing the necessary detail at the appropriate model fidelity level for each DC component. However, guidance is lacking on which components within the DC are crucial for achieving the desired accuracy of the computational model, and determining the input values of the component object parameters during a site survey remains an exercise in engineering judgement. Sensitivity analysis can be an effective methodology for determining how the level of simplification in component models affects model accuracy. In this study, a calibrated raised-floor DC model is used to study the sensitivity of DC model accuracy to how each DC component is represented. The commercial CFD tool 6SigmaDC Room is used for modeling and simulation. A total of 8 DC components are considered and eventually ranked on the basis of the time and effort required to collect their model input data. For each parametrized component object, simulations are run over the full range of its input parameter values, and the results are compared with the baseline calibrated model to understand the trade-off between survey effort/cost and model accuracy. For the calibrated DC model and the 8 components considered, the chilled water piping branches, data cables, and the cable penetration seal (found within cabinets) were observed to have considerable influence on the accuracy of the tile flow rate prediction.
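As a purely illustrative aid (not code from the paper), the Python sketch below mimics the kind of baseline comparison described above: per-component parameter sweeps are scored by how far their predicted perforated-tile flow rates deviate from the calibrated baseline, and the components are then ranked alongside a survey-effort estimate. All component names, effort values, and flow rates are made-up placeholders.

```python
# Minimal sketch (assumptions, not the paper's code): rank data-center components
# by how strongly sweeping their input parameters shifts predicted perforated-tile
# flow rates away from a calibrated baseline. All numbers are illustrative.
import numpy as np

def tile_flow_deviation(baseline, swept):
    """Worst-case mean absolute relative deviation of tile flow rates.

    baseline : (n_tiles,) calibrated-model tile flow rates
    swept    : (n_runs, n_tiles) tile flow rates, one row per sweep simulation
    """
    rel_err = np.abs(swept - baseline) / np.abs(baseline)
    return rel_err.mean(axis=1).max()

# Hypothetical results: baseline tile flows plus sweeps for three components.
rng = np.random.default_rng(0)
baseline = rng.uniform(300.0, 600.0, size=40)   # CFM per tile (placeholder)
sweeps = {
    "chilled_water_piping": baseline * rng.normal(1.0, 0.08, size=(5, 40)),
    "data_cables":          baseline * rng.normal(1.0, 0.05, size=(5, 40)),
    "cabinet_blanking":     baseline * rng.normal(1.0, 0.01, size=(5, 40)),
}
survey_effort_hours = {"chilled_water_piping": 6, "data_cables": 4, "cabinet_blanking": 1}

ranking = sorted(
    ((name, tile_flow_deviation(baseline, runs), survey_effort_hours[name])
     for name, runs in sweeps.items()),
    key=lambda t: t[1], reverse=True,
)
for name, dev, hours in ranking:
    print(f"{name:22s} max mean deviation = {dev:5.1%}  survey effort ~ {hours} h")
```

The point of such a ranking is the trade-off the abstract describes: a component whose simplification barely moves the tile flow prediction may not justify detailed data collection during the site survey.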
- Award ID(s): 1738811
- PAR ID: 10276499
- Date Published:
- Journal Name: 2021 37th Semiconductor Thermal Measurement, Modeling & Management Symposium (SEMI-THERM)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Paszynski, M.; Kranzlmüller, D.; Krzhizhanovskaya, V.V.; Dongarra, J.J.; Sloot, P.M. (Eds.): Global sensitivity analysis (GSA) is a method to quantify the effect of the input parameters on the outputs of physics-based systems. Performing GSA can be challenging due to the combined effect of the high computational cost of each individual physics-based model, a large number of input parameters, and the need to perform repetitive model evaluations. To reduce this cost, neural networks (NNs) are used to replace the expensive physics-based model in this work. This introduces the additional challenge of finding the minimum number of training data samples required to train the NNs accurately. In this work, a new method is introduced to accurately quantify the GSA values by iterating over both the number of samples required to train the NNs, terminated using an outer-loop sensitivity convergence criterion, and the number of model responses required to calculate the GSA, terminated with an inner-loop sensitivity convergence criterion. The iterative surrogate-based GSA guarantees converged values for the Sobol' indices and, at the same time, alleviates the need to specify arbitrary accuracy metrics for the surrogate model. The proposed method is demonstrated in two cases, namely an eight-variable borehole function and a three-variable nondestructive testing (NDT) case. For the borehole function, both the first- and total-order Sobol' indices required 200 training samples and 10⁵ model responses to satisfy the outer- and inner-loop sensitivity convergence criteria, respectively. For the NDT case, the outer loop converged at 100 training samples for both the first- and total-order indices, while the inner loop required 10⁶ and 10³ model responses for the first- and total-order indices, respectively. The differences between the proposed method and GSA performed on the true functions are less than 3% in the analytical case and less than 10% in the physics-based case (where the larger error comes from small Sobol' indices). (A minimal illustrative sketch of this double-loop scheme appears after this list.)
- The experimental results of LEAP (Liquefaction Experiments and Analysis Projects) centrifuge test replicas of a saturated sloping deposit are used to assess the sensitivity of soil accelerations to variability in input motion and soil deposition. A difference metric is used to quantify the dissimilarities between recorded acceleration time histories. This metric is uniquely decomposed into four difference component measures associated with phase, frequency shift, amplitude at 1 Hz, and amplitude of frequency components higher than 2 Hz (2+ Hz). The sensitivity of the deposit response accelerations to differences in the 1 Hz and 2+ Hz input motion amplitudes and in cone penetration resistance (used as a measure reflecting soil deposition and initial grain packing condition) was obtained using Gaussian process-based kriging. These accelerations were found to be more sensitive to variations in cone penetration resistance than to the amplitude of the 1 Hz and 2+ Hz (frequency) components of the input motion. The sensitivity functions associated with this resistance parameter were found to be substantially nonlinear. (An illustrative kriging-based sensitivity sketch appears after this list.)
- Traditionally, web applications have been written as HTML pages with embedded JavaScript code that implements dynamic and interactive features by manipulating the Document Object Model (DOM) through a low-level browser API. However, this unprincipled approach leads to code that is brittle, difficult to understand, non-modular, and does not facilitate incremental updates of user interfaces in response to state changes. React is a popular framework for constructing web applications that aims to overcome these problems. React applications are written in a declarative and object-oriented style and consist of components that are organized in a tree structure. Each component has a set of properties representing input parameters, a state consisting of values that may vary over time, and a render method that declaratively specifies the subcomponents of the component. React’s concept of reconciliation determines the impact of state changes and updates the user interface incrementally by selective mounting and unmounting of subcomponents. At designated points, the React framework invokes lifecycle hooks that enable programmers to perform actions outside the framework, such as acquiring and releasing resources needed by a component. These mechanisms exhibit considerable complexity, but, to our knowledge, no formal specification of React’s semantics exists. This paper presents a small-step operational semantics that captures the essence of React, as a first step towards the long-term goal of developing automatic tools for program understanding, automatic testing, and bug finding for React web applications. To demonstrate that key operations such as mounting, unmounting, and reconciliation terminate, we define the notion of a well-behaved component and prove that well-behavedness is preserved by these operations. (A toy model of mounting and reconciliation appears after this list.)
- 3D object recognition accuracy can be improved by learning multi-scale spatial features from 3D spatial geometric representations of objects such as point clouds, 3D models, surfaces, and RGB-D data. Current deep learning approaches learn such features either from structured data representations (voxel grids and octrees) or from unstructured representations (graphs and point clouds). Learning features from structured representations is limited by restrictions on resolution and tree depth, while unstructured representations create a challenge due to non-uniformity among data samples. In this paper, we propose an end-to-end multi-level learning approach on a multi-level voxel grid to overcome these drawbacks. To demonstrate the utility of the proposed multi-level learning, we use a multi-level voxel representation of 3D objects to perform object recognition. The multi-level voxel representation consists of a coarse voxel grid that contains volumetric information about the 3D object. In addition, each voxel in the coarse grid that contains a portion of the object boundary is subdivided into multiple fine-level voxel grids. The performance of our multi-level learning algorithm for object recognition is comparable to that of dense voxel representations while using significantly less memory. (A minimal sketch of the two-level voxel construction appears after this list.)
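For the first related item above (iterative surrogate-based GSA), the following Python sketch is a rough illustration of the double-loop idea rather than the authors' implementation: an outer loop grows the neural-network training set and an inner loop grows the Monte Carlo sample used for the first-order Sobol' estimates, each stopping when the indices change by less than a tolerance. The cheap Ishigami test function and scikit-learn's MLPRegressor are stand-ins for the expensive physics-based model and the paper's neural network; all sample sizes and tolerances are assumptions.

```python
# Minimal sketch of iterative surrogate-based GSA (illustrative assumptions only).
import numpy as np
from sklearn.neural_network import MLPRegressor

def ishigami(x, a=7.0, b=0.1):
    # Stand-in for the expensive physics-based model.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def first_order_sobol(predict, n_mc, d, rng):
    """Pick-and-freeze estimate of first-order Sobol' indices on the surrogate."""
    A = rng.uniform(-np.pi, np.pi, size=(n_mc, d))
    B = rng.uniform(-np.pi, np.pi, size=(n_mc, d))
    yA, yB = predict(A), predict(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S1[i] = np.mean(yB * (predict(ABi) - yA)) / var_y
    return S1

rng = np.random.default_rng(1)
d, tol = 3, 0.02
prev_outer = None
for n_train in (64, 128, 256, 512, 1024):            # outer loop: surrogate training size
    X = rng.uniform(-np.pi, np.pi, size=(n_train, d))
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                      random_state=0).fit(X, ishigami(X))
    prev_inner, S1 = None, None
    for n_mc in (1_000, 4_000, 16_000, 64_000):       # inner loop: GSA sample size
        S1 = first_order_sobol(nn.predict, n_mc, d, rng)
        if prev_inner is not None and np.max(np.abs(S1 - prev_inner)) < tol:
            break                                     # inner-loop convergence criterion
        prev_inner = S1
    print(f"n_train={n_train:5d}  S1={np.round(S1, 3)}")
    if prev_outer is not None and np.max(np.abs(S1 - prev_outer)) < tol:
        break                                         # outer-loop convergence criterion
    prev_outer = S1
```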
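For the second related item (the LEAP sensitivity study), the sketch below illustrates, with entirely synthetic data, how a Gaussian-process (kriging) surrogate can be fit to a response quantity as a function of cone penetration resistance and the 1 Hz / 2+ Hz input-motion amplitudes, and how a simple main-effect variance can then be compared across inputs. It is not the study's actual metric, data, or code; the response model and sample sizes are assumptions.

```python
# Minimal kriging-based sensitivity sketch with synthetic data (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
n = 60
# Hypothetical normalized inputs: [cone resistance, 1 Hz amplitude, 2+ Hz amplitude]
X = rng.uniform(0.0, 1.0, size=(n, 3))
# Hypothetical response: dominated by cone resistance, mildly affected by amplitudes.
y = 2.0 * X[:, 0]**2 + 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, n)

kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

def main_effect_variance(gp, i, n_grid=21, n_mc=500):
    """Variance of the GP mean prediction as input i sweeps its range while the
    other inputs are averaged out (a simple main-effect sensitivity measure)."""
    grid = np.linspace(0.0, 1.0, n_grid)
    means = []
    for g in grid:
        Z = rng.uniform(0.0, 1.0, size=(n_mc, 3))
        Z[:, i] = g
        means.append(gp.predict(Z).mean())
    return np.var(means)

labels = ["cone resistance", "1 Hz amplitude", "2+ Hz amplitude"]
effects = [main_effect_variance(gp, i) for i in range(3)]
for name, v in sorted(zip(labels, effects), key=lambda t: -t[1]):
    print(f"{name:16s} main-effect variance ~ {v:.4f}")
```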
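For the third related item (the React semantics paper), the toy model below is neither React itself nor the paper's formal semantics; it is written in Python so that all sketches here share one language. It only illustrates the mount/unmount and reconciliation behaviour being formalized: components render to element trees, and reconciliation reuses an instance when the element type is unchanged, otherwise unmounting the old subtree and mounting a new one.

```python
# Toy mount/reconcile model (illustrative only; not React's actual algorithm).
class Element:
    def __init__(self, comp_type, props):
        self.comp_type, self.props = comp_type, props

class Instance:
    def __init__(self, element):
        self.element = element
        self.component = element.comp_type(element.props)    # "mount": construct component
        self.children = [Instance(e) for e in self.component.render()]

    def unmount(self):
        for child in self.children:
            child.unmount()
        print(f"unmount {type(self.component).__name__}")

def reconcile(instance, new_element):
    """Update the instance tree to match new_element; return the new root."""
    if type(instance.component) is not new_element.comp_type:
        instance.unmount()
        return Instance(new_element)                  # type changed: remount subtree
    instance.component.props = new_element.props      # same type: update props, re-render
    new_children = instance.component.render()
    old = instance.children
    instance.children = [
        reconcile(old[i], e) if i < len(old) else Instance(e)   # reuse or mount
        for i, e in enumerate(new_children)
    ]
    for extra in old[len(new_children):]:              # children that disappeared
        extra.unmount()
    return instance

class Label:                                           # leaf component
    def __init__(self, props): self.props = props
    def render(self): return []

class App:                                             # renders one Label per item
    def __init__(self, props): self.props = props
    def render(self):
        return [Element(Label, {"text": t}) for t in self.props["items"]]

root = Instance(Element(App, {"items": ["a", "b", "c"]}))
root = reconcile(root, Element(App, {"items": ["a", "b"]}))    # prints: unmount Label
```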
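For the last related item (multi-level voxel learning), the sketch below shows one plausible way, assumed rather than taken from the paper, to build a two-level voxel representation: a coarse occupancy grid over a point cloud, with each partially occupied coarse voxel refined into its own fine occupancy grid as a crude stand-in for "contains a portion of the object boundary". Grid resolutions and the sphere point cloud are placeholders.

```python
# Minimal two-level voxel grid sketch (illustrative assumptions only).
import numpy as np

def voxelize(points, origin, voxel_size, dims):
    """Boolean occupancy grid: True where at least one point falls in the voxel."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < dims), axis=1)
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid

def multilevel_voxels(points, coarse_dims=(8, 8, 8), fine_dims=(4, 4, 4)):
    origin = points.min(axis=0)
    size = (points.max(axis=0) - origin + 1e-9) / np.array(coarse_dims)  # coarse voxel edges
    coarse = voxelize(points, origin, size, np.array(coarse_dims))
    fine = {}
    for c in np.argwhere(coarse):                       # refine occupied coarse voxels
        lo = origin + c * size
        inside = points[np.all((points >= lo) & (points < lo + size), axis=1)]
        sub = voxelize(inside, lo, size / np.array(fine_dims), np.array(fine_dims))
        if 0 < sub.sum() < sub.size:                    # partially filled: treat as boundary
            fine[tuple(c)] = sub
    return coarse, fine

# Hypothetical input: points sampled on a unit-sphere surface.
rng = np.random.default_rng(3)
p = rng.normal(size=(5000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
coarse, fine = multilevel_voxels(p)
print("occupied coarse voxels:", int(coarse.sum()), "| refined boundary voxels:", len(fine))
```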