

Title: Sequencing and Analysis of the Entire Genome of the Mycoparasitic Bioeffector Fungus Trichoderma asperelloides Strain T 203 (Hypocreales)
ABSTRACT The filamentous mycoparasitic fungus Trichoderma asperelloides (Hypocreales, Ascomycota, Dikarya) strain T 203 was isolated from soil in Israel by the Ilan Chet group in the 1980s. Because it has been the subject of laboratory, greenhouse, and field experiments and has been incorporated into commercial agricultural preparations, its genome has been sequenced and analyzed.
Award ID(s):
1916137
NSF-PAR ID:
10350997
Author(s) / Creator(s):
Editor(s):
Stajich, Jason E.
Date Published:
Journal Name:
Microbiology Resource Announcements
Volume:
11
Issue:
2
ISSN:
2576-098X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. After attending this presentation, attendees will gain knowledge of a strategy for high-throughput, simultaneous analysis of cannabinoids and will appreciate a validated LC-UV method for the analysis of twelve cannabinoids in hemp oil. This presentation will first impact the forensic science community by introducing three fast LC separations of twelve cannabinoids that can be used with either UV or mass spectrometric (MS) detection. It will further impact the forensic science community by introducing a validated LC-UV method for high-throughput, simultaneous analysis of twelve cannabinoids in hemp oil, which can be routinely used by cannabis testing labs.

In recent years, the use of products of Cannabis sativa L. for medicinal purposes has grown rapidly, although their preparation procedures have not been clearly standardized and their quality has not been well regulated. To analyze the therapeutic components, i.e., cannabinoids, in products of Cannabis sativa L., LC-UV has been used frequently, because LC-UV is commonly available and usually appropriate for routine analysis by cannabis growers and commercial suppliers. A few validated LC-UV methods have been described in the literature; however, all of them quantify only eleven or fewer cannabinoids. A method able to simultaneously analyze more cannabinoids in a shorter run time therefore remains in high demand, because more and more cannabinoids have been isolated and many of them have shown medicinal properties.

In this study, the LC separation of twelve cannabinoids, including cannabichromene (CBC), cannabidiolic acid (CBDA), cannabidiol (CBD), cannabidivarinic acid (CBDVA), cannabidivarin (CBDV), cannabigerolic acid (CBGA), cannabigerol (CBG), cannabinol (CBN), delta-8 tetrahydrocannabinol (Δ8-THC), delta-9 tetrahydrocannabinolic acid A (Δ9-THCA A), delta-9 tetrahydrocannabinol (Δ9-THC), and tetrahydrocannabivarin (THCV), was systematically optimized on a Phenomenex Luna Omega 3 µm Polar C18 150 mm × 4.6 mm column with regard to the type of organic solvent (methanol or acetonitrile), the content of the organic solvent, and the pH of the mobile phase. The optimization yielded three LC conditions at 1.0 mL/minute able to separate the twelve cannabinoids: (1) a mobile phase consisting of water and methanol, both containing 0.1% formic acid (pH 2.69), with a gradient elution held at 75% methanol for the first 3 minutes and then increased linearly to 100% methanol at 12.5 minutes; (2) a mobile phase consisting of water and 90% (v/v) acetonitrile in water, both containing 0.1% formic acid and 20 mM ammonium formate (pH 3.69), with an isocratic elution at 75% acetonitrile for 14 minutes; and (3) a mobile phase consisting of water and 90% (v/v) acetonitrile in water, both containing 0.03% formic acid and 20 mM ammonium formate (pH 4.20), with an isocratic elution at 75% acetonitrile for 14 minutes.

To demonstrate the effectiveness of the achieved separations, an LC-UV method was further validated for high-throughput, simultaneous analysis of the twelve cannabinoids. The method used the mobile phase at pH 3.69, which gave a significant improvement in throughput over previously published validated LC-UV methods, and used flurbiprofen as the internal standard. The linear calibration range of all the cannabinoids was 0.1 to 25 ppm with R² ≥ 0.9993, and the LOQ (S/N = 10) of the cannabinoids was between 17.8 and 74.2 ppb. The validation used a hemp oil, reported by the vendor with a certificate of analysis to contain 3.2 wt% CBD and no other cannabinoids, as the matrix for control samples: the hemp oil was first extracted by liquid-liquid extraction (LLE) with methanol, and the cannabinoids were then spiked into the extract at both the 0.5 ppm and 5 ppm levels. The recovery, precision (%RSD), and accuracy (%Error) of the control samples were then assessed, and the results met the requirements of the ISO/IEC 17025 and ASTM E2549-14 guidelines.
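To make the validation arithmetic concrete, here is a minimal Python sketch of how recovery, precision (%RSD), accuracy (%Error), and an S/N-scaled LOQ are commonly computed; the function names and the replicate values are hypothetical illustrations and are not taken from the study.

```python
import statistics

def validation_stats(measured_ppm, spiked_ppm):
    """Recovery, precision (%RSD), and accuracy (%Error) for replicate
    measurements of a matrix sample spiked at a known concentration."""
    mean = statistics.mean(measured_ppm)
    recovery = 100.0 * mean / spiked_ppm                 # % of spike recovered
    rsd = 100.0 * statistics.stdev(measured_ppm) / mean  # relative standard deviation
    error = 100.0 * abs(mean - spiked_ppm) / spiked_ppm  # deviation from nominal
    return recovery, rsd, error

def loq_from_sn(conc_ppm, sn_measured, sn_target=10.0):
    """Scale a low-level standard's concentration to the concentration
    expected to give the target signal-to-noise ratio (S/N = 10 for LOQ)."""
    return conc_ppm * sn_target / sn_measured

# Hypothetical replicates of a 0.5 ppm spike in the hemp-oil extract
recovery, rsd, error = validation_stats([0.48, 0.51, 0.50, 0.49], spiked_ppm=0.5)
print(f"recovery = {recovery:.1f}%, RSD = {rsd:.1f}%, error = {error:.1f}%")

# Hypothetical 0.1 ppm (100 ppb) standard observed at S/N = 25
print(f"LOQ ~ {1000.0 * loq_from_sn(0.1, 25.0):.0f} ppb")
```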
  2. Three-dimensional (3D) extrusion printing of cellular/acellular structures with biocompatible materials has been widely investigated in recent years. However, the requirement of a suitable solidification rate for printable ink materials constrains the utilization of extrusion-based 3D printing techniques. In this study, a nanoclay yield-stress suspension-enabled extrusion-based 3D printing system has been investigated and demonstrated to overcome this solidification rate constraint during printing. Utilizing the liquid-solid transition property of nanoclay suspensions, two fabrication approaches have been proposed: nanoclay support bath-enabled printing and nanoclay-enabled direct printing. In the former approach, nanoclay (Laponite® EP) is used as a support bath material to fabricate alginate-based tympanic membrane patches; the constituents of the alginate-based ink have been investigated to achieve the desired mechanical properties of the patches and to facilitate the printing process. In the latter approach, nanoclay (Laponite® XLG) is used as an internal scaffold material to help print poly(ethylene glycol) diacrylate (PEGDA)-based neural chambers, which can be further cross-linked in air. Mechanical stress analysis has been performed to explore the geometric limits of printable Laponite® XLG-PEGDA neural chambers.
  3. Obeid, Iyad; Selesnick, Ivan; Picone, Joseph (Eds.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important that we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to annotate using the pipeline described below, so accurate triage is also important from a financial standpoint. A summary of the labels used to annotate the data is shown in Table 2.

Certain standards are put in place to optimize the annotation process without sacrificing consistency. Due to the nature of EEG recordings, some records start with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard "3 second rule" is used to determine whether two events should be combined into a single larger event; this greatly reduces the time it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, our standard dictates that no seizure shorter than 3 seconds be annotated. Although there is no universally accepted definition of how long a seizure must be, we find it difficult to discern with confidence between burst suppression and other morphologically similar impressions when an event is only a couple of seconds long, most notably because of the lack of evolution, which is often crucial for determining a seizure.

After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin annotation. An example of an annotation is shown in Figure 1, and a summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate: each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on its specific characteristics; however, on average a file containing multiple seizures takes 7 minutes to fully annotate, including the time it takes to read the patient report and traverse the entire file. Once an event has been identified, the start and stop times for the seizure are stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete. Even though no annotation is being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to scrolling through a file from start to finish, a file is often examined through different lenses: depending on the situation, low-pass filters are used, as well as increased amplitude on certain channels. These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed.

Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments on any changes they recommend. This takes about 3 minutes per seizure-containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate.

Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can be trained using data that resembles what they would see under normal circumstances. An additional benefit of using released data for training is that it serves as a means of constantly checking our work: if a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to the point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work. To ensure that the annotators are consistent, the team periodically conducts an interrater agreement evaluation to confirm that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4], and an extended discussion of interrater agreement can be found in Shah et al. [5].

The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes made to the annotations was the imposition of a stricter standard for determining the start and stop times of a seizure. Although evolution is still included in the annotations, start times were altered to begin when the spike-wave pattern becomes distinct, as opposed to merely when the signal starts to shift from background; this cuts down on background that was mislabeled as seizure. For seizure end times, all post-ictal slowing that had been included was removed. The v1.5.2 release did not include any additional data files, although two EEG files that were corrupted in v1.5.1 were retrieved and added. The progression from v1.5.0 to v1.5.1 and later to v1.5.2 included the re-annotation of all of the EEG files in order to develop a confident dataset for seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions.

The annotation team is currently working on the next release of TUSZ, v1.6.0, which is expected in August 2020. It will include new data from 2016 to mid-2019: 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files obtained with our standard triaging process, part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports; some of this data contains interesting seizure events collected during long-term EEG sessions or from patients with a history of frequent seizures, and it is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020.

The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/.
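As an illustration of the "3 second rule" described above, the following is a minimal Python sketch that merges annotated (start, stop) events separated by less than 3 seconds and then drops any event shorter than 3 seconds; the function name and the event representation are hypothetical and are not part of the NEDC annotation tooling.

```python
def apply_three_second_rule(events, min_gap=3.0, min_dur=3.0):
    """Post-process (start, stop) seizure annotations, in seconds:
    merge events separated by less than min_gap, then drop any
    resulting event shorter than min_dur."""
    merged = []
    for start, stop in sorted(events):
        if merged and start - merged[-1][1] < min_gap:
            merged[-1][1] = max(merged[-1][1], stop)  # close the gap: extend previous event
        else:
            merged.append([start, stop])
    return [(s, e) for s, e in merged if e - s >= min_dur]

# Two bursts 2 s apart merge into one 12 s event; the 1.5 s blip is dropped.
print(apply_three_second_rule([(10.0, 15.0), (17.0, 22.0), (40.0, 41.5)]))
# -> [(10.0, 22.0)]
```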
  4. This theory paper focuses on understanding how mastery learning has been implemented in undergraduate engineering courses, through a systematic review. Academic environments that promote learning, mastery, and continuous improvement rather than inherent ability can promote performance and persistence. Scholarship has argued that students can achieve mastery of course material when the time available to master concepts and the quality of instruction are made appropriate to each learner. Increasing the time available to demonstrate mastery involves a course structure that allows repeated attempts on learning assessments (i.e., homework, quizzes, projects, exams): students are not penalized for failed attempts but are rewarded for achieving eventual mastery. The mastery learning approach recognizes that mastery is not always achieved on the first attempt and that learning from mistakes and persisting is fundamental to how we learn. This singular concept potentially has the greatest impact on students' mindset in terms of their belief that they can be successful in learning the course material.

A significant amount of attention has been given to mastery learning in secondary education, where it has shown an exceptionally positive effect on student achievement. However, implementing mastery learning in an undergraduate course can be a cumbersome process, as it requires instructors to significantly restructure their assignments, exams, evaluation process, and grading practices. In light of these challenges, it is unclear to what extent mastery learning has been implemented in undergraduate engineering courses or whether similar positive effects can be found. We therefore conducted a systematic review to elucidate, for the U.S., (1) how mastery learning has been implemented in undergraduate engineering courses from 1990 to the present and (2) what student outcomes have been reported for these implementations.

Using the systematic process outlined by Borrego et al. (2014), we surveyed seven databases and identified a total of 584 articles covering engineering and non-engineering courses. We focused our review on studies centered on applying the mastery learning pedagogical method in undergraduate engineering courses. All peer-reviewed and practitioner articles and conference proceedings within our scope were included in the synthesis phase of the review; most articles were excluded based on our inclusion and exclusion criteria. Twelve studies focused on applying mastery learning to undergraduate engineering courses. The mastery learning method was mainly applied to midterm exams; a few studies used the method on homework assignments, and no study applied it to the final exam. Students reported an increase in learning as a result of mastery learning, and several studies reported that students' grades on a traditional final exam were not affected by it. Students' self-reported evaluations of the courses suggest that students prefer the mastery learning approach over traditional methods, although a clear consensus on its effect could not be reached because each article applied different survey instruments to capture students' perspectives. Responses to open-ended questions had mixed results: two studies report more positive student comments, while one study reports receiving more negative comments regarding the implementation of the mastery learning method.

In the full paper we more thoroughly describe the ways in which mastery learning was implemented, along with clear examples of common and divergent student outcomes across the twelve studies.