

Title: Every contact leaves a trace: Documenting contamination in lithic residue studies at the Middle Palaeolithic sites of Lusakert Cave 1 (Armenia) and Crvena Stijena (Montenegro)
Investigations of organic lithic micro-residues have, over the last decade, shifted from entirely morphological observations using visible-light microscopy to compositional ones using scanning electron microscopy and Fourier-transform infrared microspectroscopy, providing a seemingly objective chemical basis for residue identifications. Contamination, though, remains a problem that can affect these results. Modern contaminants, accumulated during the post-excavation lives of artifacts, are pervasive, subtle, and even “invisible” (unlisted ingredients in common lab products). Ancient contamination is a second issue. The aim of residue analysis is to recognize residues related to use, but other types of residues can also accumulate on artifacts. Caves are subject to various taphonomic forces and organic inputs, and use-related residues can degrade into secondary compounds. This organic “background noise” must be taken into consideration. Here we show that residue contamination is more pervasive than is often appreciated, as revealed by our studies of Middle Palaeolithic artifacts from two sites: Lusakert Cave 1 in Armenia and Crvena Stijena in Montenegro. First, we explain how artifacts from Lusakert Cave 1, despite being handled following specialized protocols, were tainted by a modern-day contaminant from an unanticipated source: a release agent used inside the zip-top bags that are ubiquitous in the field and lab. Second, we document that, when non-artifact “controls” are studied alongside artifacts from Crvena Stijena, comparisons reveal that organic residues adhere to both, indicating that they are prevalent throughout the sediments and not necessarily related to use. We provide suggestions for reducing contamination and increasing the reliability of residue studies. Ultimately, we propose that archaeologists working in the field of residue studies must start with the null hypothesis that minuscule organic residues reflect contamination, either ancient or modern, and systematically proceed to rule out all possible contaminants before interpreting them as evidence of an artifact’s use in the distant past.
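The screening workflow the authors advocate (treat each micro-residue as a potential contaminant until comparison with reference materials rules that out) can be illustrated with a minimal sketch. Everything below is hypothetical: the spectra are random placeholders, the contaminant names are stand-ins, and the cosine-similarity comparison is a generic illustration, not the spectral-matching procedure used in the study.

# Minimal sketch (invented spectra): screen a residue's FTIR spectrum against
# reference spectra of known modern contaminants (e.g., a bag release agent)
# before treating it as evidence of use. All names and values are placeholders.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two baseline-corrected spectra."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical reference library of lab/field contaminants (500-point spectra).
contaminant_library = {
    "bag_release_agent": rng.random(500),
    "glove_powder": rng.random(500),
}
# A residue spectrum that happens to resemble the bag release agent.
residue_spectrum = contaminant_library["bag_release_agent"] + 0.05 * rng.random(500)

MATCH_THRESHOLD = 0.95   # arbitrary cutoff for this illustration
for name, reference in contaminant_library.items():
    score = cosine_similarity(residue_spectrum, reference)
    flag = "possible contaminant match" if score >= MATCH_THRESHOLD else "no match"
    print(f"{name}: similarity = {score:.3f} ({flag})")

In practice, such a reference library would be built from the actual products that touch the artifacts (bags, gloves, consolidants, lab reagents), which is the point of the protocol changes suggested in the paper.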
Award ID(s):
2011401
NSF-PAR ID:
10411324
Author(s) / Creator(s):
Editor(s):
Peresani, Marco
Date Published:
Journal Name:
PLOS ONE
Volume:
17
Issue:
4
ISSN:
1932-6203
Page Range / eLocation ID:
e0266362
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Over the last decade, several hyper-scale data center companies such as Google, Facebook, and Microsoft have demonstrated the cost-saving capabilities of airside economization with direct/indirect heat exchangers by moving to chiller-less air-cooled data centers. Under pressure from data center owners, information technology equipment OEMs such as Dell and IBM are developing information technology equipment that can withstand peak excursion temperature ratings of up to 45 °C, clearly outside the recommended envelope and into ASHRAE's A4 allowable envelope. As popular and widespread as these cooling technologies are becoming, airside economization comes with its challenges. Uncontrolled fine particulate and gaseous contaminants, in the presence of temperature and humidity transients, pose a risk of premature hardware failures or reliability degradation. This paper presents an in-depth review of the particulate and gaseous contamination-related challenges faced by modern data center facilities that use airside economization. The review summarizes specific experimental and computational studies that characterize airborne contaminants and the associated failure modes and mechanisms. In addition, standard lab-based and in-situ test methods for measuring the corrosive effects of particles and corrosive gases under different temperature and relative humidity conditions, as a means of testing equipment robustness against these contaminants, are also reviewed. The paper also outlines cost-sensitive mitigation techniques, such as improved filtration strategies, that can be used for efficient implementation of airside economization.
  2. Dietary DNA metabarcoding enables researchers to identify and characterize trophic interactions with a high degree of taxonomic precision. It is also sensitive to sources of bias and contamination in the field and lab. One of the earliest and most common strategies for dealing with such sensitivities has been to filter the resulting sequence data to remove low-abundance sequences before conducting ecological analyses based on the presence or absence of food taxa. Although this step is now often perceived to be both necessary and sufficient for cleaning up datasets, evidence to support this perception is lacking, and more attention needs to be paid to the related risk of introducing other undesirable errors. Using computer simulations, we demonstrate that common strategies to remove low-abundance sequences can erroneously eliminate true dietary sequences in ways that impact downstream dietary inferences. Using real data from well-studied wildlife populations in Yellowstone National Park, we further show how these strategies can markedly alter the composition of individual dietary profiles in ways that scale up to obscure ecological interpretations about dietary generalism, specialism, and niche partitioning. Although the practice of removing low-abundance sequences may continue to be a useful strategy to address a subset of research questions that focus on a subset of relatively abundant food resources, its continued widespread use risks generating misleading perceptions about the structure of trophic networks. Researchers working with dietary DNA metabarcoding data—or similar data such as environmental DNA, microbiomes, or pathobiomes—should be aware of potential drawbacks and consider alternative bioinformatic, experimental, and statistical solutions.

    We used fecal DNA metabarcoding to characterize the diets of bison and bighorn sheep in winter and summer. Our analyses are based on 35 samples (median per species per season = 10) analyzed using the P6 loop of the chloroplast trnL(UAA) intron together with publicly available plant reference data (Illumina sequence read data are available at NCBI; BioProject: PRJNA780500). Obicut was used to trim reads with a minimum quality threshold of 30, and primers were removed from forward and reverse reads using cutadapt. All further sequence identifications were performed using obitools: forward and reverse sequences were aligned with the illuminapairedend command using a minimum alignment score of 40, and only joined sequences were retained. We used the obiuniq command to group identical sequences and tally them within samples, enabling us to quantify the relative read abundance (RRA) of each sequence. Sequences that occurred ≤2 times overall or that were ≤8 bp were discarded. Sequences were considered likely PCR artifacts if they were highly similar to another sequence (1 bp difference) and had a much lower abundance (0.05%) in the majority of samples in which they occurred; we discarded these sequences using the obiclean command. Overall, we characterized 357 plant sequences, and a subset of 355 sequences was retained in the dataset after rarefying samples to equal sequencing depth. We then applied relative read abundance thresholds from 0% to 5% to the fecal samples. We compared differences in the inferred dietary richness within and between species based on individual samples, on average richness across samples, and on the total richness of each population after accounting for differences in sample size (a minimal sketch of this kind of abundance-based thresholding appears below).
The readme file contains an explanation of each of the variables in the dataset. Information on the methodology can be found in the associated manuscript referenced above.  
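    The abundance-threshold filtering discussed above can be sketched in a few lines. The function, sample name, sequence identifiers, and read counts below are invented for illustration; this is not the authors' obitools workflow, only a demonstration of how a within-sample relative read abundance (RRA) cutoff can drop a rare but genuine food taxon.

# Minimal sketch (not the authors' pipeline): applying a relative read
# abundance (RRA) threshold to a per-sample table of sequence counts.
# Sample names, sequence IDs, and read counts are invented for illustration.

def apply_rra_threshold(counts, threshold):
    """Drop sequences whose within-sample RRA falls below `threshold`.

    counts: dict mapping sample -> {sequence_id: read_count}
    threshold: e.g. 0.01 for a 1% RRA cutoff
    Returns a new dict with sub-threshold sequences removed.
    """
    filtered = {}
    for sample, seqs in counts.items():
        total = sum(seqs.values())
        filtered[sample] = {
            seq: n for seq, n in seqs.items()
            if total > 0 and n / total >= threshold
        }
    return filtered

# Invented example: a rare but genuine food plant ("seq_C") survives a 0%
# threshold but is discarded at 1% and 5%, shrinking inferred dietary richness.
bison_sample = {"sample_01": {"seq_A": 9200, "seq_B": 700, "seq_C": 45}}
for t in (0.0, 0.01, 0.05):
    kept = apply_rra_threshold(bison_sample, t)["sample_01"]
    print(f"RRA threshold {t:.0%}: {len(kept)} sequences retained -> {sorted(kept)}")

    Running the loop shows the rare sequence surviving a 0% threshold but disappearing at 1% and 5%, which is exactly the kind of change in inferred dietary richness that the simulations described above are concerned with.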
  3. Recent developments in speleothem science are demonstrating the potential of speleothems for paleofire reconstruction through a variety of inorganic and organic proxies, including trace metals (1) and the pyrogenic organic compound levoglucosan (2). Previous work by Argiriadis et al. (2019) presented a method for the analysis of trace polycyclic aromatic hydrocarbons (PAHs) and n-alkanes in stalagmites (3). These compounds reflect biogeochemical processes occurring at the land surface, in the soil, and in the cave. PAHs are primarily related to combustion of biomass, while n-alkanes, with their potential for vegetation reconstruction (4), provide information on fuel availability and composition as well as fire activity. These organic molecules are carried downward by infiltrating water and incorporated into speleothems (5), thereby creating the potential to serve as novel paleofire archives. Using this approach, we developed a high-resolution stalagmite record of paleofire activity from cave KNI-51 in tropical northwestern Australia. This site is well suited for high-resolution paleofire reconstruction: bushfire activity in this tropical savanna is among the highest on the continent, the cave is shallow and overlain by extremely thin soils, and the stalagmites are fast-growing (1-2 mm yr⁻¹) and precisely dated. We analyzed three stalagmites that grew continuously during different intervals of the last millennium: KNI-51-F (CE ~1100-1620), KNI-51-G (CE ~1320-1640), and KNI-51-11 (CE ~1750-2009). Samples were drilled continuously at 1-3 mm resolution from stalagmite slabs and processed in a stainless-steel cleanroom to prevent contamination. Despite a difference in resolution between stalagmites KNI-51-F and -G, peaks in the target compounds show good replication in the overlapping time interval of the two stalagmites, and PAH abundances in a portion of stalagmite KNI-51-11 that grew from CE 2000-2009 are well correlated with satellite-mapped fires occurring proximally to the cave. Our results suggest an increase in the frequency of low-intensity fire in the 20th century relative to much of the previous millennium. The timing of this shift is broadly coincident with the arrival of European pastoralists in the late 19th century and the subsequent displacement of Aboriginal peoples from the land. Aboriginal peoples had previously practiced “fire stick farming”, a method of prescribed, low-intensity burning that was an important influence on ecology, biomass, and fire. Prior to the late 1800s, the period with the most frequent low-intensity fire activity was the 13th century, the wettest interval of the entire record. Peak high-intensity fire activity occurred during the 12th century. Controlled burn and irrigation experiments capable of examining the transmission of pyrogenic compounds from the land surface to cave dripwater represent the next step in this analysis. Given that karst is present in many fire-prone environments, and that stalagmites can be precisely dated and grow continuously for millennia, the potential utility of a stalagmite-based paleofire proxy is high.
    (1) L. K. McDonough et al., Geochim. Cosmochim. Acta 325, 258–277 (2022). (2) J. Homann et al., Nat. Commun. 13, 7175 (2022). (3) E. Argiriadis et al., Anal. Chem. 91, 7007–7011 (2019). (4) R. T. Bush, F. A. McInerney, Geochim. Cosmochim. Acta 117, 161–179 (2013). (5) Y. Sun et al., Chemosphere 230, 616–627 (2019).
  4. Summary Lay Description

    Asphalt binder, or bitumen, is the glue that holds aggregate particles together to form a road surface. It is derived from the heavy residue that remains after distilling gasoline, diesel and other lighter products out of crude oil. Nevertheless, bitumen varies widely in composition and mechanical properties. To avoid expensive road failures, bitumen must be processed after distillation so that its mechanical properties satisfy diverse climate and load requirements. International standards now guide these mechanical properties, but yield varying long‐term performance as local source composition and preparation methods vary. In situ diagnostic methods that can predict bitumen performance independently of processing history are therefore needed.

    The present work focuses on one promising diagnostic candidate: microscopic observation of internal bitumen structure. Past bitumen microscopy has revealed microstructures of widely varying composition, size, shape and density. A challenge is distinguishing bulk microstructures, which directly influence a binder's mechanical properties, from surface microstructures, which often dominate optical microscopy because of bitumen's opacity and scanning‐probe microscopy because of its inherent surface specificity. In previously published work, we used infrared microscopy to enhance visibility of bulk microstructure. Here, as a foil to this work, we use visible‐wavelength microscopy together with atomic‐force microscopy (AFM) specifically to isolate surface microstructure, to understand its distinct origin and morphology, and to demonstrate its unique sensitivity to surface alterations. To this end, optical microscopy complements AFM by enabling us to observe surface microstructures form at temperatures (50°C–70°C) at which bitumen's fluidity prevents AFM, and to observe surface microstructure beneath transparent, but chemically inert, liquid (glycerol) and solid (glass) overlayers, which alter surface tension compared to free surfaces.

    From this study, we learned, first, that, as bitumen cools, distinctly wrinkled surface microstructures form at the same temperature at which independent calorimetric studies showed crystallization in bitumen, causing it to release latent heat of crystallization. This shows that surface microstructures are likely precipitates of the crystallizable component(s). Second, a glycerol overlayer on the cooling bitumen results in smaller, less wrinkled, sparser microstructures, whereas a glass overlayer suppresses them altogether. In contrast, underlying smaller bulk microstructures are unaffected. This shows that surface tension is the driving force behind formation and wrinkling of surface precipitates. Taken together, the work advances our ability to diagnose bitumen samples noninvasively by clearly distinguishing surface from bulk microstructure.

     
  5. Abstract

    In recent years, concerns have been raised regarding the contamination of grapes with pesticide residues. As consumer demand for safer food products grows, regular monitoring of pesticide residues in food has become essential. This study sought to develop a rapid and sensitive technique for detecting two specific pesticides (phosmet and paraquat) present on the grape surface using the surface‐enhanced Raman spectroscopy (SERS) method. Gold nanostar (AuNS) particles were synthesized, featuring spiky tips that act as hot spots for localized surface plasmon resonance, thereby enhancing Raman signals. Additionally, the roughened surface of AuNS increases the surface area, resulting in improved interactions between the substrate and analyte molecules. Prominent Raman peaks of the mixed contaminants were acquired and used to characterize and quantify the pesticides. The SERS intensity of the Raman peaks was observed to change in proportion to the concentration ratio of phosmet and paraquat. Moreover, AuNS exhibited superior SERS enhancement compared to gold nanoparticles. The results demonstrate that the lowest detectable concentration for both pesticides on grape surfaces was 0.5 mg/kg. These findings suggest that SERS coupled with AuNS constitutes a practical and promising approach for detecting and quantifying trace contaminants in food.

    Practical Application

    This research established a novel surface‐enhanced Raman spectroscopy (SERS) method coupled with a simplified extraction protocol and gold nanostar substrates to detect trace levels of pesticides in fresh produce. The detection limits meet the maximum residue limits set by the EPA. This substrate has great potential for rapid measurements of chemical contaminants in foods.
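    As a rough illustration of how a detection limit like the 0.5 mg/kg figure above is typically established, the sketch below fits a linear calibration of SERS peak intensity against spiked concentration and derives a 3-sigma limit of detection. The concentrations and intensities are invented placeholders, not the study's measurements, and the 3*sigma/slope rule is a common convention rather than the authors' stated procedure.

# Minimal sketch (invented data, not the study's measurements): fit a linear
# SERS calibration curve of Raman peak intensity vs. pesticide concentration,
# then estimate a limit of detection (LOD) using the common 3*sigma/slope rule.
import numpy as np

# Hypothetical spiked-grape standards (mg/kg) and baseline-corrected peak
# intensities (arbitrary units) for a characteristic phosmet Raman band.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
intensity = np.array([120.0, 230.0, 460.0, 1150.0, 2280.0])

slope, intercept = np.polyfit(conc, intensity, 1)   # least-squares line
residuals = intensity - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                        # residual standard deviation
lod = 3 * sigma / slope                              # 3-sigma LOD estimate

print(f"slope = {slope:.1f} a.u. per mg/kg, intercept = {intercept:.1f} a.u.")
print(f"estimated LOD ~ {lod:.2f} mg/kg")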

     