

Title: Theoretical vehicle bridge interaction model for bridges with non-simply supported boundary conditions
Theoretical vehicle-bridge interaction (VBI) models have been studied extensively for decades for the simply supported boundary condition, but not for other boundary conditions. This paper presents mathematical models for several non-simply supported boundary conditions, including both ends fixed, fixed-simply supported, and one end fixed with the other end free (cantilever). Closed-form solutions can be found under the assumption that the vehicle acceleration magnitude is far lower than the gravitational acceleration constant. The analytical solutions are then illustrated on a specific bridge example to compare the responses under the different boundary conditions and to study the effects of different vehicle parameters on extracting multiple (five) bridge frequencies from the vehicle responses. A signal drift phenomenon can be observed in the acceleration responses of both the bridge and the vehicle, while a camel-hump phenomenon can be observed in the fast Fourier transform of the vehicle acceleration signal. The parameter study shows that the vehicle frequency should preferably be high, because bridge frequencies above the vehicle frequency are otherwise attenuated. The vehicle speed should preferably be low, to reduce both the camel-hump phenomenon and the vehicle acceleration magnitude, while the vehicle mass and damping parameters have little effect on the extraction of multiple bridge frequencies from the vehicle. Besides presenting explicit solutions for calibrating other numerical models, this study also demonstrates the feasibility of the vehicle-based bridge health monitoring approach, as any bridge anomaly due to deterioration may be sensitively reflected in the bridge frequency list extracted from the vehicle response.
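The boundary conditions compared in the abstract have standard closed-form natural-frequency formulas for a uniform Euler-Bernoulli beam, which makes the comparison easy to reproduce. A minimal sketch follows; the bridge properties (EI, mass per length, span) are hypothetical values chosen only for illustration, not taken from the paper:

```python
import math

# Hypothetical bridge properties (assumptions, not from the paper).
EI = 2.5e10     # flexural rigidity, N*m^2
rho_A = 5.0e3   # mass per unit length, kg/m
L = 30.0        # span, m

# Standard dimensionless roots (beta_n * L) of each frequency equation.
LAMBDA = {
    "simply supported": [math.pi * n for n in (1, 2, 3)],
    "fixed-fixed":      [4.730, 7.853, 10.996],
    "fixed-pinned":     [3.927, 7.069, 10.210],
    "cantilever":       [1.875, 4.694, 7.855],
}

def natural_frequencies_hz(lambdas, EI, rho_A, L):
    """f_n = lambda_n^2 / (2*pi*L^2) * sqrt(EI / rho_A)."""
    return [lam ** 2 / (2 * math.pi * L ** 2) * math.sqrt(EI / rho_A)
            for lam in lambdas]

for bc, lams in LAMBDA.items():
    freqs = natural_frequencies_hz(lams, EI, rho_A, L)
    print(bc, [round(f, 2) for f in freqs])
```

As the roots show, fixing both ends stiffens the beam (higher frequencies) while the cantilever is the most flexible case, which is why the paper's boundary-condition comparison matters for frequency-based monitoring.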
Award ID(s):
1645863
NSF-PAR ID:
10229448
Author(s) / Creator(s):
Date Published:
Journal Name:
Engineering structures
Volume:
232
ISSN:
0141-0296
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The coupled differential equation group for the vehicle-bridge interaction system is reestablished to include both the vehicle and bridge damping effects. The equation group can be uncoupled, and closed-form solutions for both the bridge and the vehicle can be obtained under the assumption that the vehicle acceleration magnitude is much lower than the gravitational acceleration constant. Then, based on a simply supported boundary condition scenario, several critical parameters, including bridge damping, vehicle frequency, vehicle speed, vehicle mass, and vehicle damping, are studied to investigate their effects on extracting multiple bridge frequencies from the vehicle. The results show that bridge damping plays a significant role in the vibration behaviour of both the vehicle and the bridge compared to vehicle damping. The vehicle should preferably be designed with a frequency above the bridge frequencies of interest, since a low vehicle frequency tends to attenuate bridge frequencies that are higher than the vehicle frequency. A camel-hump phenomenon can be observed on the extracted bridge frequencies from the vehicle, especially for scenarios that involve high bridge vibration modes and high vehicle speed. Vehicle speed should preferably be kept low to meet the theoretical assumption and to reduce the camel-hump phenomenon. Although vehicle mass is not necessarily limited in this study, there is a magnitude balance among vehicle mass, vehicle speed, and damping to meet the theoretical assumption. This theoretical work may offer guidance for designing a special field test vehicle to monitor bridges in a more comprehensive way.
  2.
    Abstract

    Many structures are subjected to varying forces, moving boundaries, and other dynamic conditions. Whether part of a vehicle, building, or active energy mitigation device, data on such changes can represent useful knowledge but also present challenges in their collection and analysis. In systems where changes occur rapidly, the system's state must be assessed within a useful time span so that an appropriate response can be made before the state changes further. Rapid state estimation is especially important, but poses unique difficulties.

    In determining the state of a structural system subjected to high-rate dynamic changes, measuring the frequency response is one method that can be used to draw inferences, provided the system is adequately understood and defined. The work presented here is the result of an investigation into methods to determine, in real time, the frequency response, and thus the state, of a structure subjected to high-rate boundary changes.

    In order to facilitate development, the Air Force Research Laboratory created the DROPBEAR, a testbed with an oscillating beam subjected to a continuously variable boundary condition. One end of the beam is held by a stationary fixed support, while a pinned support is able to move along the beam’s length. The free end of the beam structure is instrumented with acceleration, velocity, and position sensors measuring the beam’s vertical axis. Direct position measurement of the pin location is also taken to provide a reference for comparison with numerical models.

    This work presents a numerical investigation into methods for extracting the frequency response of a structure in real time. An FFT-based method with a rolling window is used to track the frequency of a data set generated to represent the range of the DROPBEAR, and is run with multiple window lengths. The frequency precision and latency of the FFT method are analyzed in each configuration. A specialized frequency extraction technique, Delayed Comparison Error Minimization, is implemented with parameters optimized for the frequency range of interest. Its latency and precision are analyzed and compared to the baseline rolling-FFT results, and its applicability is discussed.
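    The precision-versus-latency tradeoff of a rolling spectral window can be sketched with a plain DFT: the frequency resolution is fs/N, so a longer window sharpens the estimate but responds later to a boundary change. The signal, sampling rate, and window size below are hypothetical stand-ins for the DROPBEAR record, not the study's actual configuration:

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Peak frequency of one window via a plain DFT (stdlib only).

    Resolution is fs / len(samples): longer windows give finer
    frequency estimates at the cost of higher latency.
    """
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n

def rolling_frequency(signal, fs, window):
    """Slide a fixed-length window along the record, one estimate per hop."""
    hop = window // 2
    return [dominant_frequency(signal[i:i + window], fs)
            for i in range(0, len(signal) - window + 1, hop)]

# Synthetic stand-in: the dominant frequency steps from 20 Hz to 35 Hz,
# imitating a moving-pin stiffness change.
fs = 200
sig = [math.sin(2 * math.pi * 20 * t / fs) for t in range(400)]
sig += [math.sin(2 * math.pi * 35 * t / fs) for t in range(400)]
print(rolling_frequency(sig, fs, window=100))
```

With a 100-sample window at 200 Hz the resolution is 2 Hz and the step is detected about half a window late, which is the kind of tradeoff the study quantifies.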

     
  3.
    Abstract
    We outline and interpret a recently developed theory of impedance matching or reflectionless excitation of arbitrary finite photonic structures in any dimension. The theory includes both the case of guided wave and free-space excitation. It describes the necessary and sufficient conditions for perfectly reflectionless excitation to be possible and specifies how many physical parameters must be tuned to achieve this. In the absence of geometric symmetries, such as parity and time-reversal, the product of parity and time-reversal, or rotational symmetry, the tuning of at least one structural parameter will be necessary to achieve reflectionless excitation. The theory employs a recently identified set of complex frequency solutions of the Maxwell equations as a starting point, which are defined by having zero reflection into a chosen set of input channels, and which are referred to as R-zeros. Tuning is generically necessary in order to move an R-zero to the real frequency axis, where it becomes a physical steady-state impedance-matched solution, which we refer to as a reflectionless scattering mode (RSM). In addition, except in single-channel systems, the RSM corresponds to a particular input wavefront, and any other wavefront will generally not be reflectionless. It is useful to consider the theory as representing a generalization of the concept of critical coupling of a resonator, but it holds in arbitrary dimension, for arbitrary number of channels, and even when resonances are not spectrally isolated. In a structure with parity and time-reversal symmetry (a real dielectric function) or with parity–time symmetry, generically a subset of the R-zeros has real frequencies, and reflectionless states exist at discrete frequencies without tuning. However, they do not exist within every spectral range, as they do in the special case of the Fabry–Pérot or two-mirror resonator, due to a spontaneous symmetry-breaking phenomenon when two RSMs meet.
Such symmetry-breaking transitions correspond to a new kind of exceptional point, only recently identified, at which the shape of the reflection and transmission resonance lineshape is flattened. Numerical examples of RSMs are given for one-dimensional multimirror cavities, a two-dimensional multiwaveguide junction, and a multimode waveguide functioning as a perfect mode converter. Two solution methods to find R-zeros and RSMs are discussed. The first one is a straightforward generalization of the complex scaling or perfectly matched layer method and is applicable in a number of important cases; the second one involves a mode-specific boundary matching method that has only recently been demonstrated and can be applied to all geometries for which the theory is valid, including free space and multimode waveguide problems of the type solved here. 
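    For the Fabry–Pérot special case cited above, the untuned reflectionless frequencies can be checked directly from the standard normal-incidence reflection formula for a lossless dielectric slab in vacuum; reflection vanishes whenever the one-way optical phase across the slab is a multiple of pi. A minimal sketch, with an assumed refractive index:

```python
import cmath
import math

def slab_reflection(n_slab, delta):
    """Normal-incidence reflection amplitude of a lossless dielectric
    slab in vacuum (the two-interface Fabry-Perot special case).

    delta = n_slab * k0 * d is the one-way optical phase across the
    slab of thickness d. r vanishes whenever delta is a multiple of
    pi: these are the reflectionless states that exist without tuning
    in this symmetric geometry.
    """
    r12 = (1.0 - n_slab) / (1.0 + n_slab)   # vacuum -> slab Fresnel coefficient
    phase = cmath.exp(2j * delta)
    return r12 * (1.0 - phase) / (1.0 - r12 ** 2 * phase)

for m in (0.5, 1.0, 1.5, 2.0):
    print(f"delta = {m} pi: |r| = {abs(slab_reflection(2.0, m * math.pi)):.4f}")
```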
  4. Recently, drive-by bridge inspection has attracted increasing attention in the bridge monitoring field. A number of studies have given confidence in the feasibility of the approach to detect, quantify, and localize damage. However, the speed of the inspection truck represents a major obstacle to the success of this method. High speeds are essential to induce a significant amount of kinetic energy to excite the bridge modes of vibration. On the other hand, low speeds are necessary to collect more data and to attenuate the vibration of the vehicle due to the roughness of the road, and hence magnify the bridge's influence on the vehicle responses. This article introduces Frequency Independent Underdamped Pinning Stochastic Resonance (FI-UPSR) as a new technique that can extract bridge dynamic properties from the responses of a vehicle that passes over the bridge at high speed. Stochastic Resonance (SR) is a phenomenon in which feeble information, such as a weak signal, can be amplified with the assistance of background noise. In this study, the bridge vibrations present in the vehicle responses are the feeble information, while the noise accounts for the effect of the road roughness on the vehicle vibration. UPSR is the SR model chosen in this study for its suitability for extracting the bridge vibration. The main contributions of this article are: (1) introducing a frequency-independent stochastic resonance model, the FI-UPSR, and (2) implementing this model to extract the bridge vibration from the responses of a fast-passing vehicle.
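    The FI-UPSR model itself is not specified in this abstract, but the underlying stochastic-resonance mechanism can be sketched with the textbook bistable (double-well) system, in which a weak periodic signal is amplified by noise-assisted hopping between wells. This is not the paper's model, and all parameter values below are illustrative assumptions:

```python
import math
import random

def bistable_sr(signal_amp, noise_d, omega, steps=20000, dt=1e-3, seed=1):
    """Euler-Maruyama integration of the classic bistable SR model:

        dx/dt = x - x**3 + signal_amp * cos(omega * t) + noise

    This is the textbook stochastic-resonance system, not FI-UPSR;
    it only illustrates how background noise can assist a weak
    periodic forcing in driving well-to-well transitions.
    """
    rng = random.Random(seed)
    x, xs = 1.0, []
    for i in range(steps):
        t = i * dt
        drift = x - x ** 3 + signal_amp * math.cos(omega * t)
        x += drift * dt + math.sqrt(2 * noise_d * dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

# The weak signal alone (no noise) cannot cross the barrier between wells;
# adding noise makes inter-well hopping possible.
quiet = bistable_sr(signal_amp=0.2, noise_d=0.0, omega=0.05)
noisy = bistable_sr(signal_amp=0.2, noise_d=0.3, omega=0.05)
print(min(quiet), min(noisy))
```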
  5. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. 
A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining an historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations. The overall impact of all of these issues described above is significant as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but that is expensive since you need to do multiple runs over the data, which further taxes a computing infrastructure already running at max capacity. 
GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, it adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA’s cuDNN implementation provides algorithms that increase the performance and help the model train quicker, but they are non-deterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment, and if the changes we make are efficient. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. 
Therefore, we save the data ordering from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
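The seeding strategy described above can be sketched with a toy, stdlib-only stand-in for a training run; a real TensorFlow job would also seed the NumPy and TensorFlow generators (numpy.random.seed, tf.random.set_seed), as noted in the comments:

```python
import os
import random

def seeded_run(seed, n=5):
    """Toy training-like routine whose 'results' depend only on RNG draws.

    Seeding every generator the framework touches (here just the stdlib
    one; a real TensorFlow job would also call numpy.random.seed and
    tf.random.set_seed) pins down the initialization point so that the
    same job on the same processor produces the same results.
    """
    os.environ["PYTHONHASHSEED"] = str(seed)  # affects hash randomization in subprocesses
    rng = random.Random(seed)
    weights = [rng.gauss(0.0, 1.0) for _ in range(n)]  # stand-in for weight initialization
    order = list(range(n))
    rng.shuffle(order)                                  # stand-in for data presentation order
    return weights, order

run_a = seeded_run(1234)
run_b = seeded_run(1234)
print(run_a == run_b)
```

Seeding alone does not make GPU training deterministic (cuDNN algorithm selection and multithreading remain sources of noise, as discussed above), but it removes the initialization and data-ordering variability.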