Search for: All records

Award ID contains: 1740796

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Abstract

    Many civil structures experience significant vibrations and repeated stress cycles over their life spans. These conditions are the basis for fatigue analysis, which establishes the remaining fatigue life of a structure and ideally requires a full-field strain assessment over years of data collection. Traditional inspection methods collect strain measurements with strain gauges over a short time span and extrapolate them in time; however, large-scale deployment of strain gauges becomes expensive and laborious as more spatial coverage is desired. This paper introduces a deep learning-based approach that avoids this cost by employing inexpensive data from acceleration sensors. The collected acceleration responses are fed into a multistage deep neural network, built from long short-term memory (LSTM) and fully connected layers, to estimate the strain responses, and a novel training strategy reduces the memory required to train on long acceleration sequences. The method is evaluated on a laboratory-scale horizontally curved girder subjected to various loading scenarios.

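
The abstract above outlines an architecture of LSTM and fully connected layers that maps acceleration histories to strain histories. Below is a minimal sketch of that kind of network in PyTorch; the channel counts, layer sizes, and window-based training are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class AccelToStrainNet(nn.Module):
    """Hypothetical LSTM + fully connected network mapping acceleration
    time histories to strain time histories. All sizes are assumptions."""
    def __init__(self, n_accel=4, n_strain=2, hidden=64, layers=2):
        super().__init__()
        # Recurrent stage: encode the acceleration sequence.
        self.lstm = nn.LSTM(input_size=n_accel, hidden_size=hidden,
                            num_layers=layers, batch_first=True)
        # Fully connected stage: decode hidden states into strain.
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(),
                                  nn.Linear(32, n_strain))

    def forward(self, accel):         # accel: (batch, time, n_accel)
        hidden, _ = self.lstm(accel)  # hidden state at every time step
        return self.head(hidden)      # strain estimate at every time step

# Training on short windows of the record rather than the full sequence
# bounds the memory cost of backpropagation through time -- one plausible
# reading of the memory-saving training strategy mentioned above.
model = AccelToStrainNet()
windows = torch.randn(8, 500, 4)      # 8 windows, 500 steps, 4 sensors
strain = model(windows)               # -> (8, 500, 2)
```
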
  2. Gradient sampling (GS) methods for the minimization of objective functions that may be nonconvex and/or nonsmooth are proposed, analyzed, and tested. One of the most computationally expensive components of contemporary GS methods is the need to solve a convex quadratic subproblem in each iteration. By contrast, the methods proposed in this paper allow the use of inexact solutions of these subproblems, which, as proved in the paper, can be incorporated without loss of theoretical convergence guarantees. Numerical experiments show that, by exploiting inexact subproblem solutions, one can consistently reduce the computational effort required by a GS method. Additionally, a strategy is proposed for aggregating gradient information after a subproblem is solved (potentially inexactly), as has been exploited in bundle methods for nonsmooth optimization. It is proved that the aggregation scheme can be introduced without loss of theoretical convergence guarantees, and numerical experiments show that incorporating it can also reduce the computational effort required by a GS method.
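
The core of each GS iteration above is a convex quadratic subproblem: finding the minimum-norm element of the convex hull of sampled gradients. The sketch below illustrates one iteration on an assumed l1 objective, using a few Frank-Wolfe steps as one concrete way to produce an inexact subproblem solution; the paper's inexactness criterion and convergence safeguards are more refined than this.

```python
import numpy as np

def f(x):                    # example nonsmooth objective (assumption)
    return np.abs(x).sum()   # f(x) = ||x||_1

def subgrad(x):              # a subgradient of f, defined a.e.
    return np.sign(x)

def inexact_qp(G, iters=20):
    """Approximate the GS subproblem min_{lam in simplex} ||G @ lam||^2
    with a few Frank-Wolfe steps; stopping early yields an inexact
    solution of the kind the analysis permits."""
    lam = np.full(G.shape[1], 1.0 / G.shape[1])
    for k in range(iters):
        d = G @ lam
        j = int(np.argmin(G.T @ d))   # best simplex vertex for the
        step = 2.0 / (k + 2.0)        # linearized objective 2*G^T*d
        lam *= 1.0 - step
        lam[j] += step
    return G @ lam                    # approximate min-norm element

def gs_step(x, eps=1e-2, m=10, c=1e-4, rng=None):
    """One gradient-sampling iteration with an inexact subproblem solve."""
    rng = rng or np.random.default_rng(0)
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, x.size))
    G = np.stack([subgrad(p) for p in pts] + [subgrad(x)], axis=1)
    g = inexact_qp(G)                 # search direction is -g
    if g @ g < 1e-12:                 # (near-)stationary for this radius
        return x
    t = 1.0
    for _ in range(30):               # Armijo backtracking line search
        if f(x - t * g) <= f(x) - c * t * (g @ g):
            return x - t * g
        t *= 0.5
    return x                          # null step; a full GS method would
                                      # now shrink the sampling radius eps

x = np.array([1.0, -0.5, 0.2])
rng = np.random.default_rng(42)
for _ in range(50):
    x = gs_step(x, rng=rng)
print(f(x))                           # approaches the minimum at 0
```
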
  3. Ranzato, M.; Dauphin, Y.; Liang, P. S.; Wortman Vaughan, J. (Eds.)
    We consider a line-search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth- and first-order oracles. These oracles capture multiple standard settings, including expected loss minimization and zeroth-order optimization. Moreover, our framework is very general and allows the function and gradient estimates to be biased. The proposed algorithm is simple to describe, easy to implement, and uses these oracles in the same way that a standard deterministic line search uses exact function and gradient values. Under fairly general conditions on the oracles, we derive a high-probability tail bound on the iteration complexity of the algorithm when applied to nonconvex smooth functions. These results are stronger than those for other existing stochastic line-search methods and apply in more general settings.
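
As an illustration of the framework above, the following sketch runs a backtracking line search driven entirely by noisy zeroth- and first-order oracles. The quadratic objective, Gaussian noise model, and step-update constants are assumptions for demonstration; the paper states its guarantees in terms of oracle accuracy parameters rather than any particular noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                              # true smooth objective (illustration)
    return 0.5 * float(x @ x)

def value_oracle(x, sigma=1e-3):       # inexact zeroth-order oracle
    return f(x) + sigma * rng.standard_normal()

def grad_oracle(x, sigma=1e-3):        # inexact first-order oracle
    return x + sigma * rng.standard_normal(x.size)   # true gradient is x

def stochastic_line_search(x0, alpha=1.0, alpha_max=1.0,
                           theta=0.5, gamma=2.0, c=1e-4, iters=200):
    """Armijo-style line search using only the noisy oracles: accept and
    enlarge the step when the estimated sufficient-decrease test passes,
    otherwise shrink it. All constants here are illustrative."""
    x = x0.astype(float)
    for _ in range(iters):
        g = grad_oracle(x)
        trial = x - alpha * g
        # Sufficient decrease tested with noisy function values; near a
        # solution the noise dominates and the step size simply shrinks.
        if value_oracle(trial) <= value_oracle(x) - c * alpha * (g @ g):
            x, alpha = trial, min(gamma * alpha, alpha_max)
        else:
            alpha = theta * alpha
    return x

x = stochastic_line_search(np.ones(5))
print(f(x))                            # close to the minimum value 0
```
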