
Title: Behavior Associations in Lone Actor Terrorists
Terrorist attacks carried out by individuals or single cells have accelerated significantly over the last 20 years. This type of terrorism, defined as lone-actor (LA) terrorism, stands as one of the greatest security threats of our time. Research on LA behavior and characteristics has emerged and accelerated over the last decade. While these studies have produced valuable information on demographics, behavior, classifications, and warning signs, the relationships among these characteristics have yet to be addressed. Moreover, the means of radicalization and attacking have changed over the decades. This study first identifies 25 binary behavioral characteristics of LAs and analyzes 192 LAs recorded in three different databases. Next, classification is carried out first according to ideology, then according to incident-scene behavior via a virtual attacker-defender game, and finally according to the clusters obtained from the data. In addition, within each class, statistically significant associations and temporal relations are extracted using the Apriori algorithm. These associations would be instrumental in identifying the attacker type and intervening at the right time. The results indicate that while pre-9/11 LAs were mostly radicalized by people in their environment, post-9/11 LAs are more diverse. Furthermore, the association chains for different LA types present unique characteristic pathways to violence and after-attack behavior.
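The abstract describes Apriori-style association mining over binary behavioral indicators. As a purely illustrative sketch of that kind of analysis (the indicator names, thresholds, and the mlxtend library are assumptions, not details from the paper), it might look like this:

```python
# Minimal sketch of Apriori association mining on binary behavioral
# indicators, assuming a one-hot (0/1) table of lone-actor characteristics.
# Indicator names and threshold values are hypothetical, not from the paper.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy data: rows are lone actors, columns are binary behavioral indicators.
data = pd.DataFrame(
    {
        "leakage_of_intent": [1, 1, 0, 1, 0, 1],
        "online_radicalization": [1, 0, 0, 1, 1, 1],
        "prior_military_training": [0, 1, 0, 0, 1, 0],
        "weapon_stockpiling": [1, 1, 0, 1, 0, 1],
    },
    dtype=bool,
)

# Frequent itemsets above a minimum support threshold (illustrative value).
frequent = apriori(data, min_support=0.4, use_colnames=True)

# Association rules filtered by confidence; lift flags strength beyond chance.
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```

In the paper's setting, rules of this form (within each ideology or cluster class) are what would point to characteristic pathways from early indicators to attack and post-attack behavior.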
Authors:
Award ID(s):
1901721
Publication Date:
NSF-PAR ID:
10291807
Journal Name:
Terrorism and Political Violence
ISSN:
0954-6553
Sponsoring Org:
National Science Foundation
More Like this
  1. Context. Fast radio bursts (FRBs) are extremely energetic pulses of millisecond duration and unknown origin. To understand the phenomenon that emits these pulses, targeted and un-targeted searches have been performed for multiwavelength counterparts, including the optical. Aims. The objective of this work is to search for optical transients at the positions of eight well-localized (< 1″) FRBs after the arrival of the burst on different timescales (typically at one day, several months, and one year after FRB detection). We then compare this with known optical light curves to constrain progenitor models. Methods. We used the Las Cumbres Observatory Global Telescope (LCOGT) network to promptly take images with its network of 23 telescopes working around the world. We used a template subtraction technique to analyze all the images collected at differing epochs. We have divided the difference images into two groups: in one group we use the image of the last epoch as a template, and in the other group we use the image of the first epoch as a template. We then searched for optical transients at the localizations of the FRBs in the template-subtracted images. Results. We have found no optical transients and have therefore set limiting magnitudes to the optical counterparts. Typical limits in apparent and absolute magnitudes for our LCOGT data are ∼22 and −19 mag in the r band, respectively. We have compared our limiting magnitudes with light curves of super-luminous supernovae (SLSNe), Type Ia supernovae (SNe Ia), supernovae associated with gamma-ray bursts (GRB-SNe), a kilonova, and tidal disruption events (TDEs). Conclusions. Assuming that the FRB emission coincides with the time of explosion of these transients, we rule out associations with SLSNe (at the ∼99.9% confidence level) and the brightest subtypes of SNe Ia, GRB-SNe, and TDEs (at a similar confidence level). However, we cannot exclude scenarios where FRBs are directly associated with the faintest of these subtypes or with kilonovae.
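The absolute-magnitude limits quoted above follow from the apparent-magnitude limits via the distance modulus, M = m − 5 log10(d_L / 10 pc). A minimal sketch of that conversion, with an illustrative host redshift and the astropy Planck18 cosmology (neither the redshift nor the cosmology is taken from the abstract), could look like this:

```python
# Convert an apparent limiting magnitude to an absolute limiting magnitude
# using the distance modulus. The redshift and r-band limit are illustrative
# placeholders, not values from the paper.
from astropy.cosmology import Planck18

z_host = 0.16          # hypothetical FRB host redshift
m_limit_r = 22.0       # apparent r-band limiting magnitude (typical LCOGT depth)

# Distance modulus mu = 5*log10(d_L / 10 pc); astropy returns it in magnitudes.
mu = Planck18.distmod(z_host).value

M_limit_r = m_limit_r - mu
print(f"Absolute r-band limit: {M_limit_r:.1f} mag")  # roughly -17 at z ~ 0.16
```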
  2. Obeid, I.; Selesnik, I.; Picone, J. (Ed.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining an historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
The overall impact of all of these issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since it requires multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, it adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a sense of how our model is performing per experiment and whether the changes we make are effective. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, and using certain layers. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data ordering from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
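As a rough illustration of the mitigation steps described above (seeding the RNGs, fixing the data order, and pinning 32-bit precision), a minimal TensorFlow sketch might look like the following; the seed value, determinism flag, and toy data are assumptions, not settings taken from the poster:

```python
# Minimal sketch of the reproducibility measures described above:
# seed all random number generators, avoid shuffling the data order,
# and pin computations to 32-bit floats. All values are illustrative.
import os
import random

import numpy as np
import tensorflow as tf

SEED = 1337  # hypothetical seed, not from the poster

# Seed Python, NumPy, and TensorFlow RNGs so initialization and
# stochastic layers (e.g., dropout) start from the same point.
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)

# Request deterministic GPU kernels where TensorFlow supports them
# (cuDNN still exposes some non-deterministic algorithms).
os.environ["TF_DETERMINISTIC_OPS"] = "1"

# Keep everything in 32-bit floats rather than Python's default 64-bit.
tf.keras.backend.set_floatx("float32")

# Build the input pipeline without shuffling so every run sees the
# data in the same order (toy data stands in for the real corpus).
features = np.random.rand(1024, 16).astype("float32")
labels = np.random.randint(0, 2, size=(1024,))
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)
```

Even with all of these controls, the non-deterministic cuDNN kernels mentioned above mean that run-to-run differences can only be reduced, not eliminated.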
  3. Abstract
    This data set for the manuscript entitled "Design of Peptides that Fold and Self-Assemble on Graphite" includes all files needed to run and analyze the simulations described in this manuscript in the molecular dynamics software NAMD, as well as the output of the simulations. The files are organized into directories corresponding to the figures of the main text and supporting information. They include molecular model structure files (NAMD psf or Amber prmtop format), force field parameter files (in CHARMM format), initial atomic coordinates (pdb format), NAMD configuration files, Colvars configuration files, NAMD log files, and NAMD output including restart files (in binary NAMD format) and trajectories in dcd format (downsampled to 10 ns per frame). Analysis is controlled by shell scripts (Bash-compatible) that call VMD Tcl scripts or python scripts. These scripts and their output are also included. Version: 2.0. Changes versus version 1.0 are the addition of the free energy of folding, adsorption, and pairing calculations (Sim_Figure-7) and shifting of the figure numbers to accommodate this addition. Conventions Used in These Files: Structure Files: graph_*.psf or sol_*.psf (original NAMD (XPLOR?) format psf file including atom details (type, charge, mass), …
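The dataset's own analysis is driven by Bash scripts that call VMD Tcl or Python scripts; as a generic illustration only, a trajectory in this layout could also be inspected in Python with MDAnalysis (a library not mentioned in the record; file names below are placeholders matching the naming pattern described above):

```python
# Generic sketch of loading a NAMD psf topology and a dcd trajectory with
# MDAnalysis. The file names are placeholders, not actual dataset paths.
import MDAnalysis as mda

u = mda.Universe("sol_system.psf", "sol_system.dcd")

# Select the peptide atoms (selection string is illustrative).
peptide = u.select_atoms("protein")

# Iterate over frames (downsampled to 10 ns per frame in this dataset)
# and report the peptide center of mass at each frame.
for ts in u.trajectory:
    com = peptide.center_of_mass()
    print(f"frame {ts.frame}: time {ts.time:.1f} ps, peptide COM {com}")
```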
  4. International Ocean Discovery Program (IODP) Expedition 386, Japan Trench Paleoseismology (offshore period: 13 April to 1 June 2021; Onshore Science Party: 14 February to 14 March 2022) was designed to test the concept of submarine paleoseismology in the Japan Trench, the area where the most recent giant (i.e., magnitude 9 class) earthquake, one of only four ever instrumentally recorded worldwide, occurred in 2011. "Submarine paleoseismology" is a promising approach that investigates deposits from the deep sea, where earthquakes leave traces preserved in the stratigraphic succession, to reconstruct the long-term history of earthquakes and to deliver observational data that help reduce uncertainties in seismic hazard assessment for long return periods. This expedition marks the first time giant piston coring (GPC) was used in IODP, and also the first time partner IODP implementing organizations cooperated in jointly implementing a mission-specific platform expedition. We successfully collected 29 GPCs at 15 sites (1 to 3 holes each; total core recovery 831 meters), recovering 20 to 40-meter-long, continuous, upper Pleistocene to Holocene stratigraphic successions of 11 individual trench-fill basins along an axis-parallel transect from 36°N to 40.4°N, at water depths between 7445 and 8023 m below sea level. These offshore expedition achievements enable the first high-temporal- and high-spatial-resolution investigation and sampling of a hadal oceanic trench, one of the deepest and least explored environments on our planet. The cores are currently being examined by multimethod applications to characterize and date hadal trench sediments and extreme event deposits, for which the detailed sedimentological, physical, and (bio-)geochemical features, stratigraphic expressions, and spatiotemporal distribution will be analyzed for proxy evidence of giant earthquakes and (bio-)geochemical cycling in deep-sea sediments. Initial preliminary results presented in this EGU presentation reveal event-stratigraphic successions comprising several tens of potentially giant-earthquake-related event beds, a fascinating record that will unravel an earthquake history of the different along-strike segments that is 10–100 times longer than currently available information. Post-expedition research projects further analyzing these initial IODP data sets will (i) enable statistically robust assessment of the recurrence patterns of giant earthquakes, thereby advancing our understanding of earthquake-induced geohazards along subduction zones, and (ii) provide new constraints on sediment and carbon flux of event-triggered sediment mobilization to a deep-sea trench and its influence on the hadal environment. IODP Expedition 386 Science Party: Piero Bellanova; Morgane Brunet; Zhirong Cai; Antonio Cattaneo; Tae Soo Chang; Kanhsi Hsiung; Takashi Ishizawa; Takuya Itaki; Kana Jitsuno; Joel Johnson; Toshiya Kanamatsu; Myra Keep; Arata Kioka; Christian Maerz; Cecilia McHugh; Aaron Micallef; Luo Min; Dhananjai Pandey; Jean Noel Proust; Troy Rasbury; Natascha Riedinger; Rui Bao; Yasufumi Satoguchi; Derek Sawyer; Chloe Seibert; Maxwell Silver; Susanne Straub; Joonas Virtasalo; Yonghong Wang; Ting-Wei Wu; Sarah Zellers
  5. Abstract: Jury notetaking can be controversial despite evidence suggesting benefits for recall and understanding. Research on note taking has historically focused on the deliberation process. Yet, little research explores the notes themselves. We developed a 10-item coding guide to explore what jurors take notes on (e.g., simple vs. complex evidence) and how they take notes (e.g., gist vs. specific representation). In general, jurors made gist representations of simple and complex information in their notes. This finding is consistent with Fuzzy Trace Theory (Reyna & Brainerd, 1995) and suggests notes may serve as a general memory aid, rather than a verbatim representation. Summary: The practice of jury notetaking in the courtroom is often contested. Some states allow it (e.g., Nebraska: State v. Kipf, 1990), while others forbid it (e.g., Louisiana: La. Code of Crim. Proc., Art. 793). Some argue notes may serve as a memory aid, increase juror confidence during deliberation, and help jurors engage in the trial (Hannaford & Munsterman, 2001; Heuer & Penrod, 1988, 1994). Others argue notetaking may distract jurors from listening to evidence, that juror notes may be given undue weight, and that those who took notes may dictate the deliberation process (Dann, Hans, & Kaye, 2005). While research has evaluated the efficacy of juror notes on evidence comprehension, little work has explored the specific content of juror notes. In a similar project on which we build, Dann, Hans, and Kaye (2005) found jurors took on average 270 words of notes each, with 85% including references to jury instructions in their notes. In the present study we use a content analysis approach to examine how jurors take notes about simple and complex evidence. We were particularly interested in how jurors captured gist and specific (verbatim) information in their notes, as they have different implications for information recall during deliberation. According to Fuzzy Trace Theory (Reyna & Brainerd, 1995), people extract "gist" or qualitative meaning from information, and also exact, verbatim representations. Although both are important for helping people make well-informed judgments, gist-based understandings are purported to be even more important than verbatim understanding (Reyna, 2008; Reyna & Brainerd, 2007). As such, it could be useful to examine how laypeople represent information in their notes during deliberation of evidence. Methods: Prior to watching a 45-minute mock bank robbery trial, jurors were given a pen and notepad and instructed they were permitted to take notes. The evidence included testimony from the defendant, witnesses, and expert witnesses from prosecution and defense. Expert testimony described complex mitochondrial DNA (mtDNA) evidence. The present analysis consists of pilot data representing 2,733 lines of notes from 52 randomly selected jurors across 41 mock juries. Our final sample for presentation at AP-LS will consist of all 391 juror notes in our dataset. Based on previous research exploring jury note taking, as well as our specific interest in gist vs. specific encoding of information, we developed a coding guide to quantify juror note-taking behaviors. Four researchers independently coded a subset of notes. Coders achieved acceptable interrater reliability (Cronbach's alpha = .80-.92 on all variables across 20% of cases). Prior to AP-LS, we will link juror notes with how they discuss scientific and non-scientific evidence during jury deliberation. Coding: Note length.
Before coding for content, coders counted lines of text. Each notepad line with at minimum one complete word was coded as a line of text. Gist information vs. specific information: Any line referencing evidence was coded as gist or specific. We coded gist information as information that did not contain any specific details but summarized the meaning of the evidence (e.g., “bad, not many people excluded”). Specific information was coded as such if it contained a verbatim descriptive (e.g., “<1 of people could be excluded”). We further coded whether this information was related to non-scientific evidence or to the scientific DNA evidence. Mentions of DNA evidence vs. other evidence: We were specifically interested in whether jurors mentioned the DNA evidence and how they captured complex evidence. When DNA evidence was mentioned, we coded the content of the DNA reference. Mentions of the characteristics of mtDNA vs. nDNA, the DNA match process or who could be excluded, heteroplasmy, references to database size, and other references were coded. Reliability: When referencing DNA evidence, we were interested in whether jurors mentioned the evidence reliability. Any specific mention of reliability of DNA evidence was noted (e.g., “MT DNA is not as powerful, more prone to error”). Expert qualification: Finally, we were interested in whether jurors noted an expert's qualifications. All references were coded (e.g., “Forensic analyst”). Results: On average, jurors took 53 lines of notes (range: 3-137 lines). Most (83%) mentioned jury instructions before moving on to case-specific information. The majority of references to evidence were gist references (54%), focusing on non-scientific evidence and scientific expert testimony equally (50%). When jurors encoded information using specific references (46%), they referenced non-scientific evidence and expert testimony equally as well (50%). Thirty-three percent of lines were devoted to expert testimony, with every juror including at least one line. References to the DNA evidence were usually focused on who could be excluded from the FBI's database (43%), followed by references to differences between mtDNA vs. nDNA (30%) and mentions of the size of the database (11%). Less frequently, references to DNA evidence focused on heteroplasmy (5%). Of those references that did not fit into a coding category (11%), most focused on the DNA extraction process, general information about DNA, and the uniqueness of DNA. We further coded references to DNA reliability (15%) as well as references to specific statistical information (14%). Finally, 40% of jurors made reference to an expert's qualifications. Conclusion: Jury note content analysis can reveal important information about how jurors capture trial information (e.g., gist vs. verbatim), what evidence they consider important, and what they consider relevant and irrelevant. In our case, it appeared jurors largely created gist representations of information that focused equally on non-scientific evidence and scientific expert testimony. This finding suggests note taking may serve not only to represent information verbatim, but also, and perhaps mostly, as a general memory aid summarizing the meaning of evidence. Further, jurors' references to evidence tended to be equally focused on the non-scientific evidence and the scientifically complex DNA evidence.
This observation suggests jurors may attend just as much to non-scientific evidence as they do to complex scientific evidence in cases involving complicated evidence – an observation that might inform future work on understanding how jurors interpret evidence in cases with complex information. Learning objective: Participants will be able to describe emerging evidence about how jurors take notes during trial.
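The interrater reliability reported above is summarized with Cronbach's alpha. As a purely illustrative sketch (the ratings matrix below is made up, not data from the study), the statistic can be computed from a cases-by-coders table as follows:

```python
# Illustrative computation of Cronbach's alpha for interrater agreement,
# treating each coder as an "item" and each coded note line as a case.
# The ratings below are toy numbers, not data from the study.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: 2D array with shape (n_cases, n_coders)."""
    n_cases, n_coders = ratings.shape
    # Variance of each coder's ratings across cases (sample variance).
    item_variances = ratings.var(axis=0, ddof=1)
    # Variance of the total score (sum across coders) for each case.
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (n_coders / (n_coders - 1)) * (1 - item_variances.sum() / total_variance)

# Four coders rating six note lines on a 0/1 coding variable (toy data).
ratings = np.array([
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 1],
], dtype=float)

print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```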