Title: High-throughput proteomics: a methodological mini-review
Proteomics plays a vital role in biomedical research in the post-genomic era. With the technological revolution and emerging computational and statistical models, proteomic methodology has evolved rapidly over the past decade and shed light on complicated biomedical problems. Here, we summarize the scientific research and clinical practice of existing and emerging high-throughput proteomics approaches, including mass spectrometry, protein pathway array, next-generation tissue microarrays, single-cell proteomics, single-molecule proteomics, Luminex, Simoa and Olink Proteomics. We also discuss important computational methods and statistical algorithms that can maximize the mining of proteomic data together with clinical and/or other 'omics data. Various principles and precautions are provided for better utilization of these tools. In summary, advances in high-throughput proteomics will not only help us better understand the molecular mechanisms of pathogenesis but also help identify the signature signaling networks of specific diseases. Thus, modern proteomics has a range of potential applications in basic research, prognostic oncology, precision medicine, and drug discovery.
Award ID(s):
2128307
PAR ID:
10346760
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Laboratory Investigation
ISSN:
0023-6837
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Sex estimation of skeletons is fundamental to many archaeological studies. Currently, three approaches are available to estimate sex: osteology, genomics, or proteomics, but little is known about the relative reliability of these methods in applied settings. We present matching osteological, shotgun-genomic, and proteomic data to estimate the sex of 55 individuals, each with an independent radiocarbon date between 2,440 and 100 cal BP, from two ancestral Ohlone sites in Central California. Sex estimation was possible in 100% of this burial sample using proteomics, in 91% using genomics, and in 51% using osteology. Agreement between the methods was high; however, conflicts did occur. Genomic sex estimates were 100% consistent with proteomic and osteological estimates when DNA reads were above 100,000 total sequences. However, more than half the samples had DNA read counts below this threshold, producing high rates of conflict with osteological and proteomic data: nine of twenty conditional DNA sex estimates conflicted with proteomics. While the DNA signal decreased by an order of magnitude in the older burial samples, there was no decrease in proteomic signal. We conclude that proteomics provides an important complement to osteological and shotgun-genomic sex estimation.
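The decision logic described above (trusting a genomic estimate only when total read depth is sufficient, otherwise deferring to proteomics or osteology) can be sketched as a simple fallback rule. The function and argument names below are hypothetical illustrations, not code from the study:

```python
def consensus_sex(genomic=None, proteomic=None, osteological=None, dna_reads=0):
    """Return (estimate, basis) for one burial individual.

    Each per-method estimate is 'M', 'F', or None when that method was
    uninformative. A genomic call is trusted only above the 100,000-read
    threshold reported in the abstract; otherwise the rule falls back on
    proteomics, then osteology. (Hypothetical helper, for illustration.)
    """
    if genomic is not None and dna_reads >= 100_000:
        return genomic, "genomic"
    if proteomic is not None:
        return proteomic, "proteomic"
    if osteological is not None:
        return osteological, "osteological"
    return None, "indeterminate"

# A low-coverage genomic call is set aside in favor of the proteomic estimate.
low_coverage = consensus_sex(genomic="M", proteomic="F", dna_reads=40_000)
high_coverage = consensus_sex(genomic="M", proteomic="F", dna_reads=250_000)
```

Encoding the threshold explicitly makes the abstract's point concrete: the same genomic data flips from decisive to ignored purely on read depth.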
  2. Mass spectrometry (MS)-based proteomics has enabled the identification and quantification of thousands of proteins from complex proteomes in a single experiment. However, its performance for mass-limited proteome samples (e.g., single cells and tissue samples from laser capture microdissection) is still unsatisfactory. The development of novel proteomic methodologies with better overall sensitivity is vital. Over the last several years, substantial technical progress has been made in the preparation and liquid-phase separation-MS characterization of mass-limited proteome samples. In this review, we summarize recent technological progress in sample preparation, liquid chromatography (LC)-MS, capillary zone electrophoresis (CZE)-MS and MS instrumentation for bottom-up proteomics of trace biological samples, highlight some exciting applications of these novel techniques for single-cell proteomics, and close with a brief perspective on the field.
  3. Many clinical procedures and biomedical research workflows rely on microscopy, including diagnosis of cancer, genetic disorders, autoimmune diseases, infections, and quantification of cell culture. Despite its widespread use, traditional image acquisition and review by trained microscopists is often lengthy and expensive, limited to large hospitals or laboratories, precluding use in point‐of‐care settings. In contrast, lensless or lensfree holographic microscopy (LHM) is inexpensive and widely deployable because it can achieve performance comparable to expensive and bulky objective‐based benchtop microscopes while relying on components that cost only a few hundred dollars or less. Lab‐on‐a‐chip integration is practical and enables LHM to be combined with single‐cell isolation, sample mixing, and in‐incubator imaging. Additionally, many manual tasks in conventional microscopy are instead computational in LHM, including image focusing, stitching, and classification. Furthermore, LHM offers a field of view hundreds of times greater than that of conventional microscopy without sacrificing resolution. Here, the basic LHM principles are summarized, as well as recent advances in artificial intelligence integration and enhanced resolution. How LHM is applied to the above clinical and biomedical applications is discussed in detail. Finally, emerging clinical applications, high‐impact areas for future research, and some current challenges facing widespread adoption are identified. 
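The computational refocusing that replaces manual focusing in LHM is commonly implemented with the angular spectrum method, which propagates the recorded hologram's complex field numerically via FFTs. A minimal NumPy sketch, assuming a monochromatic source and uniform pixel pitch; the grid size, wavelength, and pitch below are illustrative values, not parameters from any specific system:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex optical field by distance z (meters) using
    the angular spectrum method. `dx` is the pixel pitch in meters; a
    negative z back-propagates, i.e. numerically refocuses a hologram."""
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Axial wavenumber kz; evanescent components (arg <= 0) are suppressed.
    arg = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Toy usage: a point-like aperture "recorded" 1 mm downstream, then
# refocused back to the aperture plane by propagating the same distance
# in reverse (532 nm illumination, 2 um pixel pitch -- illustrative).
aperture = np.zeros((256, 256), dtype=complex)
aperture[128, 128] = 1.0
hologram = angular_spectrum_propagate(aperture, 532e-9, 2e-6, 1e-3)
refocused = angular_spectrum_propagate(hologram, 532e-9, 2e-6, -1e-3)
```

Because the transfer function is a pure phase factor over propagating frequencies, back-propagation is the exact inverse of forward propagation, which is what makes focusing a cheap post-acquisition computation rather than a mechanical step.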
  4. This Work-in-Progress paper in the Research Category uses a retrospective mixed-methods study to better understand the factors that mediate learning of computational modeling by life scientists. Key stakeholders, including leading scientists, universities, and funding agencies, have promoted computational modeling to enable life sciences research and to improve the translation of genetic and molecular biology high-throughput data into clinical results. Software platforms that facilitate computational modeling by biologists who lack advanced mathematical or programming skills have had some success, but none has achieved widespread use among life scientists. Because computational modeling is a core engineering skill of value to other STEM fields, it is critical for engineering and computer science educators to consider how we help students from across STEM disciplines learn computational modeling. Currently we lack sufficient research on how best to help life scientists learn computational modeling. To address this gap, in 2017 we observed a short-format summer course designed for life scientists to learn computational modeling. The course used a simulation environment designed to lower programming barriers. We used semi-structured interviews to understand students' experiences while taking the course and in applying computational modeling afterward. We interviewed graduate students and post-doctoral researchers who had completed the course, as well as students who took the course between 2010 and 2013. Among these past attendees, we selected equal numbers of interview subjects who had and had not successfully published journal articles incorporating computational modeling. This Work-in-Progress paper applies social cognitive theory to analyze the motivations of life scientists who seek training in computational modeling and their attitudes toward computational modeling. Additionally, we identify important social and environmental variables that influence successful application of computational modeling after course completion. The findings from this study may therefore help us educate biomedical and biological engineering students more effectively. Although this study focuses on life scientists, its findings can inform engineering and computer science education more broadly. Insights from this study may be especially useful in aiding incoming engineering and computer science students who do not have advanced mathematical or programming skills and in preparing undergraduate engineering students for collaborative work with life scientists.
  5. Mass spectrometry is the dominant technology in the field of proteomics, enabling high-throughput analysis of the protein content of complex biological samples. Due to the complexity of the instrumentation and resulting data, sophisticated computational methods are required for the processing and interpretation of acquired mass spectra. Machine learning has shown great promise to improve the analysis of mass spectrometry data, with numerous purpose-built methods for improving specific steps in the data acquisition and analysis pipeline reaching widespread adoption. Here, we propose unifying various spectrum prediction tasks under a single foundation model for mass spectra. To this end, we pre-train a spectrum encoder using de novo sequencing as a pre-training task. We then show that using these pre-trained spectrum representations improves our performance on the four downstream tasks of spectrum quality prediction, chimericity prediction, phosphorylation prediction, and glycosylation status prediction. Finally, we perform multi-task fine-tuning and find that this approach improves the performance on each task individually. Overall, our work demonstrates that a foundation model for tandem mass spectrometry proteomics trained on de novo sequencing learns generalizable representations of spectra, improves performance on downstream tasks where training data is limited, and can ultimately enhance data acquisition and analysis in proteomics experiments. 
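The shared-encoder, per-task-head pattern this abstract describes can be sketched in miniature. The toy NumPy version below stands in for the paper's transformer spectrum encoder; the dimensions, random weights, and linear heads are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions standing in for a real spectrum encoder (illustrative only).
D_SPECTRUM, D_EMBED = 64, 16
TASKS = ["quality", "chimericity", "phosphorylation", "glycosylation"]

# A single shared encoder (here just one linear layer + tanh, as a stand-in
# for weights pre-trained on de novo sequencing) feeds one small head per
# downstream task -- the structure reused during multi-task fine-tuning.
encoder = rng.normal(size=(D_SPECTRUM, D_EMBED)) * 0.1
heads = {t: rng.normal(size=(D_EMBED,)) * 0.1 for t in TASKS}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(spectrum_vec, task):
    """Embed a spectrum once with the shared encoder, then apply the
    task-specific head to get a probability for that task's label."""
    z = np.tanh(spectrum_vec @ encoder)    # shared spectrum representation
    return sigmoid(z @ heads[task])        # per-task binary prediction

spectrum = rng.normal(size=D_SPECTRUM)
probs = {t: predict(spectrum, t) for t in TASKS}
```

The point of the structure is that the expensive representation `z` is computed once and shared, so tasks with little labeled data (e.g., glycosylation status) benefit from whatever the encoder learned during de novo sequencing pre-training.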