

Title: Behavior Associations in Lone Actor Terrorists
Terrorist attacks carried out by individuals or single cells have increased significantly over the last 20 years. This type of terrorism, defined as lone-actor (LA) terrorism, stands as one of the greatest security threats of our time. Research on LA behavior and characteristics has emerged and accelerated over the last decade. While these studies have produced valuable information on demographics, behavior, classifications, and warning signs, the relationships among these characteristics have yet to be addressed. Moreover, the means of radicalization and attack have changed over the decades. This study first identifies 25 binary behavioral characteristics of LAs and analyzes 192 LAs recorded in three different databases. Next, classification is carried out first according to ideology, then according to incident-scene behavior via a virtual attacker-defender game, and finally according to the clusters obtained from the data. In addition, within each class, statistically significant associations and temporal relations are extracted using the Apriori algorithm. These associations would be instrumental in identifying the attacker type and intervening at the right time. The results indicate that while pre-9/11 LAs were mostly radicalized by the people in their environment, post-9/11 LAs are more diverse. Furthermore, the association chains for different LA types present unique characteristic pathways to violence and after-attack behavior.
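To illustrate the association-mining step, the sketch below performs a simplified, two-item version of Apriori-style rule extraction over binary behavioral indicators (the full algorithm extends the same support/confidence pruning to longer itemsets). The feature names, data, and thresholds are hypothetical placeholders, not the study's actual variables, databases, or parameters.

```python
# Illustrative sketch of Apriori-style association mining over binary lone-actor
# behavioral indicators. Feature names, data, and thresholds are hypothetical.
from itertools import combinations

import pandas as pd

# Each row is one lone actor; each column is a binary behavioral characteristic.
records = pd.DataFrame(
    {
        "leakage_of_intent":     [1, 1, 0, 1, 1, 0, 1, 0],
        "online_radicalization": [1, 1, 0, 1, 0, 0, 1, 1],
        "prior_criminal_record": [0, 1, 1, 0, 0, 1, 0, 1],
        "weapons_training":      [1, 1, 0, 1, 1, 0, 1, 0],
    }
)

MIN_SUPPORT, MIN_CONFIDENCE = 0.3, 0.7

# Single-feature supports (the first Apriori pass), pruned by minimum support.
support = {c: records[c].mean() for c in records.columns}
frequent = [c for c, s in support.items() if s >= MIN_SUPPORT]

# Pairwise rules A -> B kept only if both support and confidence clear thresholds.
rules = []
for a, b in combinations(frequent, 2):
    pair_support = (records[a] & records[b]).mean()
    if pair_support < MIN_SUPPORT:
        continue
    for antecedent, consequent in [(a, b), (b, a)]:
        confidence = pair_support / support[antecedent]
        if confidence >= MIN_CONFIDENCE:
            rules.append((antecedent, consequent, pair_support, confidence))

for antecedent, consequent, s, c in rules:
    print(f"{antecedent} -> {consequent}: support={s:.2f}, confidence={c:.2f}")
```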
Award ID(s):
1901721
NSF-PAR ID:
10291807
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Terrorism and Political Violence
ISSN:
0954-6553
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The Puyehue-Cordon Caulle (PCC) volcanic complex, Chile, hosts numerous thermal features, including a ~0.8 km³ laccolith formed during the 2011-2012 eruption. Laccoliths are large intrusions emplaced between layers of country rock, and they have rarely been observed during the process of formation. We use medium-spatial-resolution (90 m/pixel) satellite data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) to identify changes at the laccolith and other thermal features within PCC between 2000 and 2022. Previous studies have analyzed thermal behavior using MODIS images, which have low spatial resolution but high temporal resolution, and Landsat images, which have medium spatial resolution but were only examined during the eruption (2011-2012). Prior research using ASTER data has only recorded the maximum temperature at PCC, while this study analyzes all of the individual thermal features and records both temperature and area for each feature identified in all 41 cloud-free, nighttime ASTER images available over the last 22 years. We focus on changes to seven features observed by satellite with temperatures at least 2 K above background (Trahuilco, Las Sopas, Los Venados, Los Baños/El Azufral, Puyehue, the laccolith, and a new unnamed feature). We create time series for each feature in order to: (1) evaluate temporal changes in area and temperature, (2) detect significant deviations from standard seasonality in non-eruptive periods, and (3) test for statistically significant precursors to the 2011 eruption. We identify both seasonal temperature variation and a general subtle increase in temperature over time at the laccolith. Furthermore, we find growth in the area of the laccolith with temperatures above background since 2016, including two periods of sudden increase in area between 11/2017 and 9/2018 and in mid-2020. We compare the ASTER observations with higher-spatial-resolution observations of fissures, craters, and fumaroles identified from field observations, drone thermal and optical imagery, and high-spatial-resolution (~1 m/pixel) satellite SAR and optical data. We interpret the thermal changes at the laccolith to be related to fractures and craters in the laccolith exposing hot regions.
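One plausible reading of the "deviation from standard seasonality" test is sketched below: each observation is compared against a month-of-year baseline and flagged when it departs strongly from it. The column names, the per-month climatology, and the 3-sigma threshold are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical sketch: flag anomalies relative to a monthly (seasonal) baseline.
# Column names ("date", "temp_K") and the 3-sigma threshold are assumptions.
import pandas as pd

def flag_seasonal_anomalies(ts: pd.DataFrame, sigma: float = 3.0) -> pd.DataFrame:
    """ts has columns 'date' (datetime) and 'temp_K' (feature temperature)."""
    ts = ts.copy()
    ts["month"] = ts["date"].dt.month
    # Month-of-year climatology built from the available observations.
    clim = ts.groupby("month")["temp_K"].agg(["mean", "std"]).rename(
        columns={"mean": "clim_mean", "std": "clim_std"}
    )
    ts = ts.join(clim, on="month")
    ts["z"] = (ts["temp_K"] - ts["clim_mean"]) / ts["clim_std"]
    ts["anomaly"] = ts["z"].abs() > sigma
    return ts
```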
  2. Context. Fast radio bursts (FRBs) are extremely energetic pulses of millisecond duration and unknown origin. To understand the phenomenon that emits these pulses, targeted and untargeted searches have been performed for multiwavelength counterparts, including the optical. Aims. The objective of this work is to search for optical transients at the positions of eight well-localized (< 1″) FRBs after the arrival of the burst, on different timescales (typically one day, several months, and one year after FRB detection). We then compare these limits with known optical light curves to constrain progenitor models. Methods. We used the Las Cumbres Observatory Global Telescope (LCOGT) network, with its 23 telescopes distributed around the world, to promptly take images of the FRB positions. We used a template-subtraction technique to analyze all the images collected at differing epochs. We divided the difference images into two groups: in one group we use the image of the last epoch as a template, and in the other group we use the image of the first epoch as a template. We then searched for optical transients at the localizations of the FRBs in the template-subtracted images. Results. We have found no optical transients and have therefore set limiting magnitudes on the optical counterparts. Typical limits in apparent and absolute magnitudes for our LCOGT data are ∼22 and −19 mag in the r band, respectively. We have compared our limiting magnitudes with light curves of super-luminous supernovae (SLSNe), Type Ia supernovae (SNe Ia), supernovae associated with gamma-ray bursts (GRB-SNe), a kilonova, and tidal disruption events (TDEs). Conclusions. Assuming that the FRB emission coincides with the time of explosion of these transients, we rule out associations with SLSNe (at the ∼99.9% confidence level) and the brightest subtypes of SNe Ia, GRB-SNe, and TDEs (at a similar confidence level). However, we cannot exclude scenarios where FRBs are directly associated with the faintest of these subtypes or with kilonovae.
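The conversion behind quoting both apparent and absolute limits can be sketched as follows: an apparent limiting magnitude is converted to an absolute one through the distance modulus at the host redshift and compared with a model peak magnitude. The redshift, limiting magnitude, and model magnitude below are placeholders, and K-corrections and extinction are ignored, so this is only an illustration of the comparison, not the paper's calculation.

```python
# Illustrative only: compare an r-band limiting magnitude with a model transient.
# The redshift, limiting magnitude, and model peak magnitude are placeholders;
# K-corrections and extinction are ignored for simplicity. Requires astropy >= 4.2.
from astropy.cosmology import Planck18

z_host = 0.2          # hypothetical FRB host redshift
m_limit = 22.0        # apparent r-band limiting magnitude from the difference image
M_model_peak = -21.0  # hypothetical peak absolute magnitude of a SLSN-like model

mu = Planck18.distmod(z_host).value   # distance modulus (mag)
M_limit = m_limit - mu                # absolute-magnitude limit (no K-correction)

# If the model peak is brighter (more negative) than the limit, the non-detection
# disfavors that model at this epoch.
print(f"mu = {mu:.2f}, M_limit = {M_limit:.2f}, ruled out: {M_model_peak < M_limit}")
```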
  3. Obeid, I. ; Selesnik, I. ; Picone, J. (Ed.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining an historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations. 
The overall impact of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since you need to do multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, it adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA’s cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment, and whether the changes we make are effective. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
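A minimal sketch of the seeding, precision, and data-ordering controls discussed above is shown below, written against the TensorFlow 2.x API rather than the v1.9/v1.13 releases mentioned in the abstract; the seed value, dataset pipeline, and determinism switch are illustrative choices, not the poster's exact configuration.

```python
# Sketch of reproducibility controls (TF 2.x style; details are illustrative).
import os
import random

import numpy as np
import tensorflow as tf

SEED = 1337
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)            # Python RNG
np.random.seed(SEED)         # NumPy RNG
tf.random.set_seed(SEED)     # TensorFlow global RNG (initializers, dropout, ...)

# Keep computations in 32-bit floats rather than Python's default 64-bit.
tf.keras.backend.set_floatx("float32")

# Request deterministic GPU kernels where available (TF >= 2.9); older releases
# relied on the TF_DETERMINISTIC_OPS environment variable instead.
tf.config.experimental.enable_op_determinism()

# Present the data in a fixed order: shuffle once with a seed and do not
# reshuffle between epochs, so every run sees the samples in the same order.
def make_dataset(features, labels, batch_size=32):
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    ds = ds.shuffle(len(features), seed=SEED, reshuffle_each_iteration=False)
    return ds.batch(batch_size)
```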
  4. Abstract

    The absolute motion of tectonic plates since Pangea can be derived from observations of hotspot trails, paleomagnetism, or seismic tomography. However, fitting observations is typically carried out in isolation without consideration for the fit to unused data or whether the resulting plate motions are geodynamically plausible. Through the joint evaluation of global hotspot track observations (for times <80 Ma), first-order estimates of net lithospheric rotation (NLR), and parameter estimation for paleo-trench migration (TM), we present a suite of geodynamically consistent, data-optimized global absolute reference frames from 220 Ma to the present. Each absolute plate motion (APM) model was evaluated against six published APM models, together incorporating the full range of primary data constraints. Model performance for published and new models was quantified through standard statistical analyses using three key diagnostic global metrics: root-mean-square plate velocities, NLR characteristics, and TM behavior. Additionally, models were assessed for consistency with published global paleomagnetic data and, for ages <80 Ma, for predicted relative hotspot motion, track geometry, and time dependence. Optimized APM models demonstrated significantly improved global fit with geological and geophysical observations while performing consistently with geodynamic constraints. Critically, APM models derived by limiting average rates of NLR to ~0.05°/Myr and absolute TM velocities to ~27 mm/year fit geological observations including hotspot tracks. This suggests that this range of NLR and TM estimates may be appropriate for Earth over the last 220 Myr, providing a key step toward the practical integration of numerical geodynamics into plate tectonic reconstructions.
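For reference, the root-mean-square plate velocity diagnostic is commonly defined as an area-weighted average over the Earth's surface; the expression below gives that standard form as one plausible reading of the metric, not necessarily the authors' exact implementation.

```latex
% Area-weighted RMS plate speed over the surface S (standard form; an assumption
% about the diagnostic used, not necessarily the authors' exact definition).
v_{\mathrm{rms}} = \sqrt{\frac{\int_{S} \lVert \mathbf{v}(\theta,\phi) \rVert^{2}\, \mathrm{d}S}{\int_{S} \mathrm{d}S}}
```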

     
  5. Abstract

    We describe the results of a new reverberation mapping program focused on the nearby Seyfert galaxy NGC 3227. Photometric and spectroscopic monitoring was carried out from 2022 December to 2023 June with the Las Cumbres Observatory network of telescopes. We detected time delays in several optical broad emission lines, with Hβ having the longest delay at τ_cent = 4.0^{+0.9}_{−0.9} days and He II having the shortest delay with τ_cent = 0.9^{+1.1}_{−0.8} days. We also detect velocity-resolved behavior of the Hβ emission line, with different line-of-sight velocities corresponding to different observed time delays. Combining the integrated Hβ time delay with the width of the variable component of the emission line and a standard scale factor suggests a black hole mass of M_BH = 1.1^{+0.2}_{−0.3} × 10^7 M_⊙. Modeling of the full velocity-resolved response of the Hβ emission line with the phenomenological code CARAMEL finds a similar mass of M_BH = 1.2^{+1.5}_{−0.7} × 10^7 M_⊙ and suggests that the Hβ-emitting broad-line region (BLR) may be represented by a biconical or flared-disk structure that we are viewing at an inclination angle of θ_i ≈ 33° and with gas motions that are dominated by rotation. The new photoionization-based BLR modeling tool BELMAC finds general agreement with the observations when assuming the best-fit CARAMEL results; however, BELMAC prefers a thick-disk geometry and kinematics that are equally composed of rotation and inflow. Both codes infer a radially extended and flattened BLR that is not outflowing.
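The mass quoted from the integrated Hβ delay follows the standard virial relation used in reverberation mapping, which combines the delay, the width of the variable line profile, and a scale factor f; the expression below is the textbook form of that relation rather than a detail stated in the abstract.

```latex
% Standard reverberation-mapping virial mass estimate: c*tau_cent gives the BLR
% radius, Delta V is the width of the variable (rms) H-beta profile, and f is the
% dimensionless scale factor accounting for BLR geometry and kinematics.
M_{\mathrm{BH}} = f \, \frac{c \, \tau_{\mathrm{cent}} \, (\Delta V)^{2}}{G}
```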

     