Title: An objective approach to the quantification of strain in three-dimensions with consideration of error assessment
The quantification of strain in three dimensions is a powerful tool for structural investigations, allowing for the direct consideration of the localization and delocalization of deformation in space and, potentially, in time. Furthermore, characterization of the distribution of strain in three dimensions may yield information concerning large-scale kinematics that may not be obtained through the traditional use of asymmetric fabrics. In this contribution, we present a streamlined methodology for the calculation of three-dimensional strain using objective approaches that allow for error assessment. This approach begins with the collection of suitable samples for strain analysis following either the Rf/ϕ or normalized Fry techniques. Samples are cut along three mutually perpendicular orientations using a set of jigs designed for use in a large oil saw. Cut faces are polished and scanned in high resolution. Scanned images are processed following a standard convention. The boundaries of objects are outlined as “Regions Of Interest” in the open-source program ImageJ and saved. A script reads the saved files of object outlines and statistically fits an ellipse to each digitized object. The parameters of the fitted ellipses are then extracted and saved. Two-dimensional strain analyses are completed with the normalized Fry method or the Rf/ϕ technique, in each case within a bootstrap statistical framework. For the normalized Fry method, an objective fitting routine modified from Mulchrone (2013) is used to determine the parameters of the central void. For the Rf/ϕ method, an inverse straining routine is applied and the resulting object orientations are tested against a random uniform distribution with a Kolmogorov–Smirnov test in order to obtain the sectional strain ratio and orientation. Bootstrap sampling of Fry coordinates or objects results in a distribution of possible sectional strains that can be sampled for fitting of strain ellipsoids following the technique of Robin (2002). As such, the parameters of three-dimensional strain, including the Lode parameter and octahedral shear strain, can be contoured based on confidence intervals for each sample processed. The application of the objective approach is presented in a corresponding poster.
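For readers who want to prototype the workflow, the sketch below illustrates, in Python with NumPy/SciPy and not using the authors' scripts, two of the steps described above: the bootstrapped Rf/ϕ inverse-straining search, in which a trial sectional strain is removed and the unstrained long-axis orientations are tested against a uniform distribution with a Kolmogorov–Smirnov test, and the conversion of the principal stretches of a fitted ellipsoid to the Lode parameter and octahedral shear strain. Function names, grid ranges, the use of an area-preserving pure shear for the trial strain, and the logarithmic octahedral shear strain convention are illustrative assumptions.

```python
# Minimal, illustrative sketch (not the authors' code) of the bootstrapped Rf/phi
# inverse-straining step and of the Lode parameter / octahedral shear strain
# calculation. Assumes each object is stored as (Rf, phi_degrees); the trial
# sectional strain is modeled as an area-preserving pure shear; grid ranges,
# names, and defaults are placeholders.
import numpy as np
from scipy.stats import kstest


def rotation(deg):
    """2x2 counterclockwise rotation matrix for an angle in degrees."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])


def shape_tensor(rf, phi_deg):
    """2x2 shape tensor of a unit-area ellipse with axial ratio rf at orientation phi."""
    return rotation(phi_deg) @ np.diag([rf, 1.0 / rf]) @ rotation(phi_deg).T


def unstrained_orientation(rf, phi_deg, rs, theta_deg):
    """Long-axis orientation (deg) after removing a pure shear of ratio rs at theta."""
    f_inv = rotation(theta_deg) @ np.diag([rs ** -0.5, rs ** 0.5]) @ rotation(theta_deg).T
    b = f_inv @ shape_tensor(rf, phi_deg) @ f_inv.T
    vals, vecs = np.linalg.eigh(b)
    long_axis = vecs[:, np.argmax(vals)]
    return np.degrees(np.arctan2(long_axis[1], long_axis[0])) % 180.0


def sectional_strain(rfs, phis, rs_grid, theta_grid):
    """Grid search for the (Rs, theta) whose removal best randomizes orientations."""
    best_rs, best_theta, best_p = 1.0, 0.0, -np.inf
    for rs in rs_grid:
        for theta in theta_grid:
            unstrained = [unstrained_orientation(rf, phi, rs, theta)
                          for rf, phi in zip(rfs, phis)]
            # Kolmogorov-Smirnov test of the unstrained orientations (rescaled to
            # [0, 1)) against a uniform distribution; keep the best-fitting pair.
            p = kstest(np.asarray(unstrained) / 180.0, "uniform").pvalue
            if p > best_p:
                best_rs, best_theta, best_p = rs, theta, p
    return best_rs, best_theta


def bootstrap_sectional_strains(rfs, phis, n_boot=200, seed=0):
    """Distribution of sectional (Rs, theta) from resampling objects with replacement."""
    rng = np.random.default_rng(seed)
    rfs, phis = np.asarray(rfs), np.asarray(phis)
    rs_grid = np.linspace(1.05, 4.0, 60)        # placeholder search ranges
    theta_grid = np.linspace(0.0, 177.5, 72)
    results = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(rfs), len(rfs))
        results.append(sectional_strain(rfs[idx], phis[idx], rs_grid, theta_grid))
    return np.array(results)


def lode_and_octahedral(s1, s2, s3):
    """Lode parameter and logarithmic octahedral shear strain from stretches s1>=s2>=s3."""
    e1, e2, e3 = np.log([s1, s2, s3])
    lode = (2.0 * e2 - e1 - e3) / (e1 - e3)
    oct_shear = np.sqrt((e1 - e2) ** 2 + (e2 - e3) ** 2 + (e3 - e1) ** 2) / np.sqrt(3.0)
    return lode, oct_shear
```

In practice, the (Rs, θ) distributions returned for each of the three mutually perpendicular cut faces would feed the Robin (2002) ellipsoid-fitting step described above, which is not reproduced here; the last function converts the principal stretches of each fitted ellipsoid to the quantities that are contoured against confidence intervals.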
Award ID(s): 1650183
PAR ID: 10055516
Author(s) / Creator(s):
Date Published:
Journal Name: Abstracts with programs - Geological Society of America
Volume: 50
Issue: 2
ISSN: 0016-7592
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single model methods by a large margin and ranks first among all Lidar-only submissions. (An illustrative sketch of the greedy matching step appears after this list.)
  2. Object signatures have been widely used in object detection and classification. Following a similar idea, the authors developed geometric signatures for architecture, engineering, and construction (AEC) objects such as footings, slabs, walls, beams, and columns. The signatures were developed both scientifically and empirically, by following a data-driven approach based on analysis of collected building information modeling (BIM) data using geometric theories. Rigorous geometric properties and statistical information were included in the developed geometric signatures. To enable open access to BIM data using these signatures, the authors also initiated a BIM data repository with a preliminary collection of AEC objects and their geometric signatures. The developed geometric signatures were preliminarily tested in a small object classification experiment in which 389 object instances from an architectural model were used. A rule-based algorithm developed using all parameter values of 14 features from the geometric signatures of the objects successfully classified 336 object instances into the correct categories of beams, columns, slabs, and walls. This accuracy of more than 85% showed that the developed geometric signatures are promising. The collected and processed data were deposited into the Purdue University Research Repository (PURR) for sharing.
  3. Recognizing and generating object-state compositions has been a challenging task, especially when generalizing to unseen compositions. In this paper, we study the task of cutting objects in different styles and the resulting object state changes. We propose a new benchmark suite Chop & Learn, to accommodate the needs of learning objects and different cut styles using multiple viewpoints. We also propose a new task of Compositional Image Generation, which can transfer learned cut styles to different objects, by generating novel object-state images. Moreover, we also use the videos for Compositional Action Recognition, and show valuable uses of this dataset for multiple video tasks. Project website: https://chopnlearn.github.io. 
  4. This paper develops uniqueness theory for 3D phase retrieval with finite, discrete measurement data for strong phase objects and weak phase objects, including: (i) Unique determination of (phase) projections from diffraction patterns: General measurement schemes with coded and uncoded apertures are proposed and shown to ensure unique reduction of diffraction patterns to the phase projection for a strong phase object (respectively, the projection for a weak phase object) in each direction separately without the knowledge of relative orientations and locations. (ii) Uniqueness for 3D phase unwrapping: General conditions for unique determination of a 3D strong phase object from its phase projection data are established, including, but not limited to, random tilt schemes densely sampled from a spherical triangle of vertexes in three orthogonal directions and other deterministic tilt schemes. (iii) Uniqueness for projection tomography: Unique
  5. Objects undergo varying amounts of perspective distortion as they move across a camera's field of view. Models for predicting 3D from a single image often work with crops around the object of interest and ignore the location of the object in the camera's field of view. We note that ignoring this location information further exaggerates the inherent ambiguity in making 3D inferences from 2D images and can prevent models from even fitting to the training data. To mitigate this ambiguity, we propose Intrinsics-Aware Positional Encoding (KPE), which incorporates information about the location of crops in the image and camera intrinsics. Experiments on three popular 3D-from-a-single-image benchmarks: depth prediction on NYU, 3D object detection on KITTI & nuScenes, and predicting 3D shapes of articulated objects on ARCTIC, show the benefits of KPE. 
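Because the greedy tracking step in item 1 is stated only compactly, a minimal sketch is included below. It is not the released CenterPoint code; the time step, distance threshold, and function name are illustrative assumptions. Detected centers in the current frame are shifted back by their regressed velocity and then greedily associated with the nearest previous-frame track center.

```python
# Illustrative sketch (not the released CenterPoint code) of greedy closest-point
# matching between current-frame detections and previous-frame track centers.
import numpy as np


def greedy_closest_point_match(prev_centers, curr_centers, curr_velocities,
                               dt=0.1, max_dist=2.0):
    """Greedily associate current detections with previous-frame track centers.

    prev_centers: (M, 3) array; curr_centers, curr_velocities: (N, 3) arrays.
    Returns a list of (prev_index, curr_index) pairs; unmatched detections would
    start new tracks and unmatched previous tracks would age out.
    """
    # Project detections back to the previous frame using their regressed velocity.
    projected = curr_centers - curr_velocities * dt
    dists = np.linalg.norm(projected[:, None, :] - prev_centers[None, :, :], axis=-1)
    matches, used_prev, used_curr = [], set(), set()
    # Visit detection/track pairs in order of increasing distance (greedy matching).
    for flat in np.argsort(dists, axis=None):
        curr_idx, prev_idx = np.unravel_index(flat, dists.shape)
        if dists[curr_idx, prev_idx] > max_dist:
            break                      # all remaining pairs are even farther apart
        if prev_idx in used_prev or curr_idx in used_curr:
            continue                   # each track / detection is matched at most once
        matches.append((int(prev_idx), int(curr_idx)))
        used_prev.add(prev_idx)
        used_curr.add(curr_idx)
    return matches
```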