This study investigates the emergence of hurricane-like vortices in idealized simulations of rotating moist convection. A Boussinesq atmosphere with simplified thermodynamics for phase transitions is forced by prescribing the temperature and humidity at the upper and lower boundaries. The governing equations are solved numerically with a variable-density incompressible Navier-Stokes solver using adaptive mesh refinement, allowing the behavior of moist convection to be explored under a broad range of conditions. In the absence of rotation, convection aggregates into active patches separated by large unsaturated regions. Rotation modulates this statistical-equilibrium state so that the self-aggregated convection organizes into hurricane-like vortices. Warm, saturated air converges toward the center of each vortex, and the latent heat released in the upwelling forms the warm-core structure. These hurricane-like vortices share many characteristics with tropical cyclones in Earth's atmosphere. They occur under conditionally unstable conditions, when the potential energy supplied at the boundaries is sufficiently large and the rotation rate is moderate. This regime shares many characteristics with the tropical atmosphere, indicating that the formation of intense mesoscale vortices is a general property of rotating moist convection. The model used here does not include interactions with radiation, wind-evaporation feedback, or cloud microphysics, indicating that, while these processes may be relevant for tropical cyclogenesis in Earth's atmosphere, they are not its primary cause. Instead, our results confirm that the formation and maintenance of hurricane-like vortices arise from the combination of rotating atmospheric dynamics and phase transitions.
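To make the "simplified thermodynamics for phase transitions" concrete, here is a minimal saturation-adjustment sketch in Python; the exponential saturation profile, the instantaneous condensation, and the latent-heat coefficient `gamma` are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

def saturation_adjustment(T, q, z, gamma=1.0, q0=1.0, Hq=1.0):
    """Minimal sketch of a simplified moist-Boussinesq phase transition.

    Hypothetical parameterization: the saturation humidity q_s decays
    exponentially with height; any excess humidity condenses instantly,
    releasing latent heat gamma * (q - q_s) into the temperature field.
    """
    q_s = q0 * np.exp(-z / Hq)          # assumed saturation profile
    excess = np.maximum(q - q_s, 0.0)   # supersaturation to condense
    T = T + gamma * excess              # latent heating warms the parcel
    q = q - excess                      # condensed water is removed
    return T, q
```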
-
Abstract. Much of our conceptual understanding of midlatitude atmospheric motion comes from two-layer quasigeostrophic (QG) models. Traditionally, these QG models do not include moisture, which accounts for an estimated 30%–60% of the available energy of the atmosphere. The atmospheric moisture content is expected to increase under global warming, and a theory for how moisture modifies atmospheric dynamics is therefore crucial. We use a two-layer moist QG model with convective adjustment as a basis for analyzing how latent heat release and large-scale moisture gradients impact the scalings of a midlatitude system at the synoptic scale. In this model, the degree of saturation can be tuned independently of the other moist parameters by enforcing a high rate of evaporation from the surface. This allows the effects of latent heat release to be studied at saturation, without the intrinsic nonlinearity of precipitation. At saturation, this system is equivalent to the dry QG model under a rescaling of both length and time. This predicts that the most unstable mode shifts to smaller scales, that growth rates increase, and that the inverse cascade extends to larger scales. We verify these results numerically and use them to validate a framework for the complete energetics of a moist system. We examine the spectral features of the energy transfer terms. This analysis shows that precipitation generates energy at small scales, while dry dynamics drive a significant broadening to larger scales. Cascades of energy are still observed in all terms, albeit without a clearly defined inertial range.
Significance Statement. The effect of moist processes, especially the impact of latent heating associated with condensation, on the size and strength of midlatitude storms is not well understood. Such insight is particularly needed in the context of global warming, as we expect moisture to play a more important role in a warmer world. In this study, we provide intuition into how including condensation can result in midlatitude storms that grow faster and have features on both larger and smaller scales than their dry counterparts. We provide a framework for quantifying these changes and verify it for the special case where it is raining everywhere. These findings can be extended to the more realistic situation where it is only raining locally.
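The saturated-limit equivalence can be written schematically in LaTeX; the rescaling factors below are kept symbolic, since the abstract does not state how they depend on the moist parameters.

```latex
% Schematic rescaling: suppose the saturated moist QG system maps onto the
% dry QG system under
\[
  x \;\to\; x/\gamma, \qquad t \;\to\; t/\gamma_t, \qquad \gamma,\ \gamma_t > 1 .
\]
% A dry normal mode $\exp(ikx + \sigma t)$ then corresponds to a moist mode with
\[
  k_{\mathrm{moist}} = \gamma\, k_{\mathrm{dry}}, \qquad
  \sigma_{\mathrm{moist}} = \gamma_t\, \sigma_{\mathrm{dry}},
\]
% i.e., the most unstable mode sits at smaller scales and grows faster,
% consistent with the shifts reported above.
```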
-
We present an overview of four challenging research areas in multiscale physics and engineering, as well as four data science topics that may be developed to address these challenges. We focus on multiscale spatiotemporal problems in light of the importance of understanding the accompanying scientific processes and engineering ideas, where "multiscale" refers to concurrent, non-trivial, and coupled models over scales separated by orders of magnitude in space, time, energy, momenta, or any other relevant parameter. Specifically, we consider problems where the data may be obtained at various resolutions; analyzing such data and constructing coupled models leads to open research questions in various applications of data science. For illustration, numerical studies are reported for one of the data science techniques discussed here, namely approximate Bayesian computation.
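As a pointer to what such a study involves, here is a minimal rejection-ABC sketch in Python; the toy Gaussian model, the uniform prior, and the mean-based distance are illustrative assumptions, not the setup used in the numerical studies above.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=10000):
    """Minimal rejection-ABC sketch: keep prior draws whose simulated
    data fall within eps of the observation under the given distance."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()            # draw a parameter from the prior
        synthetic = simulate(theta)       # simulate data at that parameter
        if distance(synthetic, observed) < eps:
            accepted.append(theta)        # approximate posterior sample
    return np.array(accepted)

# Toy usage: infer the mean of a Gaussian with known variance.
rng = np.random.default_rng(0)
obs = rng.normal(2.0, 1.0, size=50)
post = abc_rejection(
    observed=obs,
    simulate=lambda m: rng.normal(m, 1.0, size=50),
    prior_sample=lambda: rng.uniform(-5, 5),
    distance=lambda a, b: abs(a.mean() - b.mean()),
    eps=0.1,
)
```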
-
Abstract. This study investigates the inability of two popular data-splitting techniques, train/test split and k-fold cross-validation, which are used to create training and validation data sets, to achieve sufficient generality for supervised deep learning (DL) methods. This failure is mainly caused by their limited capacity to create new data. The bootstrap, by contrast, is a computer-based statistical resampling method that has been used efficiently to estimate the distribution of a sample estimator and to assess a model without knowledge of the population. This paper couples cross-validation and the bootstrap to combine their respective advantages as data-generation strategies and to achieve better generalization of a DL model. The paper contributes by: (i) developing an algorithm for better selection of training and validation data sets, (ii) exploring the potential of the bootstrap for drawing statistical inference on the necessary performance metrics (e.g., mean square error), and (iii) introducing a method that can assess and improve the efficiency of a DL model. The proposed method is applied to semantic segmentation and is demonstrated via a DL-based classification algorithm, PointNet, on aerial laser scanning point cloud data.
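A minimal sketch of one way to couple k-fold cross-validation with the bootstrap follows; the helper names (`fit`, `metric`) and the per-fold resampling scheme are illustrative assumptions, not necessarily the algorithm developed in the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

def bootstrap_kfold_scores(X, y, fit, metric, k=5, n_boot=50, seed=0):
    """Sketch of coupling k-fold CV with the bootstrap: within each fold,
    the training portion is bootstrap-resampled so that every fold yields
    a distribution of validation errors rather than a single number."""
    rng = np.random.default_rng(seed)
    scores = []
    folds = KFold(n_splits=k, shuffle=True, random_state=seed).split(X)
    for train_idx, val_idx in folds:
        for _ in range(n_boot):
            bs = rng.choice(train_idx, size=len(train_idx), replace=True)
            model = fit(X[bs], y[bs])   # train on one bootstrap replicate
            scores.append(metric(y[val_idx], model.predict(X[val_idx])))
    return np.array(scores)  # use mean and percentiles as confidence bounds
```

The mean and percentiles of the returned scores give the kind of statistical inference on performance metrics (e.g., mean square error) that contribution (ii) refers to.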
-
A TWO-STEP FEATURE EXTRACTION ALGORITHM: APPLICATION TO DEEP LEARNING FOR POINT CLOUD CLASSIFICATION
Abstract. Most deep learning (DL) methods that are not end-to-end use several multi-scale and multi-type hand-crafted features that make the network harder to train, more computationally intensive, and vulnerable to overfitting. Furthermore, reliance on empirically-based feature dimensionality reduction may lead to misclassification. In contrast, efficient feature management can reduce storage and computational complexity, build better classifiers, and improve overall performance. Principal Component Analysis (PCA) is a well-known dimension reduction technique that has been used for feature extraction. This paper presents a two-step PCA-based feature extraction algorithm that employs a variant of feature-based PointNet (Qi et al., 2017a) for point cloud classification. The paper extends the PointNet framework for use on large-scale aerial LiDAR data, and contributes by (i) developing a new feature extraction algorithm, (ii) exploring the impact of dimensionality reduction in feature extraction, and (iii) introducing a non-end-to-end PointNet variant for per-point classification in point clouds. This is demonstrated on aerial laser scanning (ALS) point clouds. The algorithm successfully reduces the dimension of the feature space without sacrificing performance, as benchmarked against the original PointNet algorithm. When tested on the well-known Vaihingen data set, the proposed algorithm achieves an Overall Accuracy (OA) of 74.64% using 9 input vectors and 14 shape features, whereas with the same 9 input vectors and only 5 PCs (principal components built from the 14 shape features) it achieves a higher OA of 75.36%, demonstrating the effect of efficient dimensionality reduction.
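The two-step idea can be sketched compactly with scikit-learn; the sketch below assumes the 14 shape features have already been computed per point, and the random arrays stand in for real ALS data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def two_step_features(inputs, shape_features, n_components=5):
    """Sketch of the two-step extraction: reduce the hand-crafted shape
    features to a few principal components, then concatenate them with
    the raw per-point input vectors before feeding the classifier."""
    scaled = StandardScaler().fit_transform(shape_features)  # PCA is scale-sensitive
    pcs = PCA(n_components=n_components).fit_transform(scaled)
    return np.hstack([inputs, pcs])  # (N, 9 + n_components) per-point features

# Toy usage with random stand-ins for real ALS features:
rng = np.random.default_rng(0)
feats = two_step_features(rng.normal(size=(1000, 9)),   # 9 input vectors per point
                          rng.normal(size=(1000, 14)))  # 14 shape features per point
```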
-
Abstract. Classifying objects within aerial Light Detection and Ranging (LiDAR) data is an essential task to which machine learning (ML) is increasingly applied. ML has been shown to be more effective on LiDAR than on imagery for classification, but most efforts have focused on imagery because of the challenges presented by LiDAR data. LiDAR datasets are of higher dimensionality, discontinuous, heterogeneous, spatially incomplete, and often scarce. As such, there has been little examination of the fundamental properties of the training data required for acceptable performance of classification models tailored to LiDAR data. The quantity of training data is one such crucial property, because training on different sizes of data provides insight into a model's performance across data sets. This paper assesses the impact of training data size on the accuracy of PointNet, a widely used ML approach for point cloud classification. Models trained on subsets of ModelNet ranging from 40 to 9,843 objects were validated on a test set of 400 objects. Accuracy improved logarithmically, decelerating from 45 objects onward and slowing significantly at a training size of 2,000 objects, corresponding to 20,000,000 points. This work contributes to the theoretical foundation for the development of LiDAR-focused models by establishing a learning curve, suggesting the minimum quantity of manually labelled data necessary for satisfactory classification performance, and providing a path for further analysis of the effects of modifying training data characteristics.
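The reported logarithmic improvement suggests a learning curve of the form acc(n) ≈ a + b ln(n); the sketch below fits such a curve, with placeholder accuracy values since the paper's measured numbers are not given in the abstract.

```python
import numpy as np

# Fit the logarithmic learning curve acc ≈ a + b * ln(n) described above.
# The (n, acc) pairs are illustrative placeholders, not the paper's data.
n = np.array([40, 100, 500, 2000, 9843], dtype=float)
acc = np.array([0.55, 0.70, 0.82, 0.88, 0.90])

b, a = np.polyfit(np.log(n), acc, deg=1)   # least-squares fit in ln(n)
predict = lambda n_new: a + b * np.log(n_new)
print(f"acc(n) ~ {a:.2f} + {b:.3f} * ln(n); acc(20000) ~ {predict(20000):.2f}")
```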