Abstract The Spatial Data Lab (SDL) project is a collaborative initiative by the Center for Geographic Analysis at Harvard University, KNIME, Future Data Lab, China Data Institute, and George Mason University. Co-sponsored by the NSF IUCRC Spatiotemporal Innovation Center, SDL aims to advance applied research in spatiotemporal studies across various domains such as business, environment, health, mobility, and more. The project focuses on developing an open-source infrastructure for data linkage, analysis, and collaboration. Key objectives include building spatiotemporal data services, a reproducible, replicable, and expandable (RRE) platform, and workflow-driven data analysis tools to support research case studies. Additionally, SDL promotes spatiotemporal data science training, cross-party collaboration, and the creation of geospatial tools that foster inclusivity, transparency, and ethical practices. Guided by an academic advisory committee of world-renowned scholars, the project is laying the foundation for a more open, effective, and robust scientific enterprise. 
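To make the idea of a workflow-driven, reproducible spatiotemporal analysis concrete, the following is a minimal sketch in Python using geopandas; the file names, column names, and the choice of a spatial join are illustrative assumptions, not components of the SDL platform itself.

```python
# Hypothetical sketch of a reproducible spatiotemporal data-linkage step:
# attach an administrative-region identifier to point records via a spatial join.
import geopandas as gpd

def link_points_to_regions(points_path, regions_path, region_id="GEOID"):
    """Attach a region identifier to each point record via a spatial join."""
    points = gpd.read_file(points_path)
    regions = gpd.read_file(regions_path)[[region_id, "geometry"]]
    # Reproject points to the polygon CRS so the join is geometrically valid.
    points = points.to_crs(regions.crs)
    return gpd.sjoin(points, regions, how="left", predicate="within")

# Example usage (file names are placeholders):
# linked = link_points_to_regions("health_events.geojson", "counties.shp")
# linked.groupby("GEOID").size()  # events per county, ready for mapping
```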
The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2022 update
            Abstract Galaxy is a mature, browser accessible workbench for scientific computing. It enables scientists to share, analyze and visualize their own data, with minimal technical impediments. A thriving global community continues to use, maintain and contribute to the project, with support from multiple national infrastructure providers that enable freely accessible analysis and training services. The Galaxy Training Network supports free, self-directed, virtual training with >230 integrated tutorials. Project engagement metrics have continued to grow over the last 2 years, including source code contributions, publications, software packages wrapped as tools, registered users and their daily analysis jobs, and new independent specialized servers. Key Galaxy technical developments include an improved user interface for launching large-scale analyses with many files, interactive tools for exploratory data analysis, and a complete suite of machine learning tools. Important scientific developments enabled by Galaxy include Vertebrate Genome Project (VGP) assembly workflows and global SARS-CoV-2 collaborations. 
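As a concrete illustration of Galaxy's programmatic accessibility, the following is a minimal sketch using BioBlend, the community Python client for the Galaxy API; the server URL, API key, and uploaded file name are placeholders, and the snippet is not an excerpt from the paper.

```python
# Minimal BioBlend sketch: connect to a Galaxy server, create a history,
# upload a file, and list the workflows available to the account.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

history = gi.histories.create_history(name="example-analysis")
gi.tools.upload_file("reads.fastq.gz", history["id"])  # placeholder file name

# List workflows already available to this account.
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])
```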
- Award ID(s): 1931531
- PAR ID: 10465527
- Author(s) / Creator(s):
- Date Published:
- Journal Name: Nucleic Acids Research
- Volume: 50
- Issue: W1
- ISSN: 0305-1048
- Page Range / eLocation ID: W345 to W351
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
            A series of international workshops held in 2014, 2017, 2019, and 2022 focused on improving tephra studies from field collection through publication and encouraging FAIR (findable, accessible, interoperable, reusable) data practices for tephra data and metadata. Two consensus needs for tephra studies emerged from the 2014 and 2017 workshops: (a) standardization of tephra field data collection, geochemical analysis, correlation, and data reporting, and (b) development of next-generation computer tools and databases to facilitate information access across multidisciplinary communities. To achieve (a), we developed a series of recommendations for best practices in tephra studies, from sample collection through analysis and data reporting (https://zenodo.org/record/3866266). A 4-part virtual workshop series (https://tephrochronology.org/cot/Tephra2022/) was held in February and March 2022 to update the tephra community on these developments, to get community feedback, to learn of unmet needs, and to plan a future roadmap for open and FAIR tephra data. More than 230 people from 25 nations registered for the workshop series. The community strongly emphasized the need for better computer systems, including physical infrastructure (repositories and servers), digital infrastructure (software and tools), and human infrastructure (people, training, and professional assistance), to store, manage, and serve global tephra datasets. Some desired attributes of improved computer systems include: 1) user friendliness; 2) ability to easily ingest multiparameter tephra data (using best-practice recommended data fields); 3) interoperability with existing data repositories; 4) development of tool add-ons (plotting and statistics); 5) improved searchability; 6) development of a tephra portal with access to distributed data systems; and 7) commitments to long-term support from funding agencies, publishers, and the cyberinfrastructure community.
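As a purely hypothetical illustration of what "best-practice recommended data fields" might look like when ingested by such a system, here is a minimal structured-record sketch in Python; the field names and example values are assumptions for illustration, not an adopted community standard.

```python
# Hypothetical tephra sample record that an ingestion pipeline might validate
# before loading into a shared repository. Field names and values are illustrative.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class TephraSample:
    sample_id: str
    latitude: float                      # decimal degrees, WGS84
    longitude: float                     # decimal degrees, WGS84
    depth_cm: Optional[float] = None     # depth below surface at collection site
    source_volcano: Optional[str] = None
    eruption_name: Optional[str] = None
    glass_oxides_wt_pct: Dict[str, float] = field(default_factory=dict)

# Example record: values are made up for illustration only.
sample = TephraSample(
    sample_id="EX-2022-001",
    latitude=64.1, longitude=-21.9,
    depth_cm=35.0,
    glass_oxides_wt_pct={"SiO2": 72.4, "K2O": 3.1},
)
```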
- 
            Abstract This paper presents the UCI Phonotactic Calculator (UCIPC), a new online tool for quantifying the occurrence of segments and segment sequences in a corpus. This tool has several advantages compared to existing tools: it allows users to supply their own training data, meaning it can be applied to any language for which a corpus is available; it computes a wider range of metrics than most existing tools; and it provides an accessible point-and-click interface that allows researchers with more modest technical backgrounds to take advantage of phonotactic models. After describing the metrics implemented by the calculator and how to use it, we present the results of a proof-of-concept study comparing how well different types of metrics implemented by the UCIPC predict human responses from eight published nonce word acceptability judgment studies across four different languages. These results suggest that metrics that take into account the relative position of sounds and include word boundaries are better at predicting human responses than those that are based on the absolute position of sounds and do not include word boundaries. We close by discussing the usefulness of tools like the UCIPC in experimental design and analysis and outline several areas of future research that this tool will help support.
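To illustrate the general flavor of such metrics (not the UCIPC's exact formulas), here is a minimal Python sketch that estimates boundary-aware bigram probabilities from a user-supplied corpus of space-separated segments and scores nonce words with them.

```python
# Illustrative sketch only: boundary-aware segment-bigram scoring trained on a
# user-supplied corpus, where each word is a string of space-separated segments.
from collections import Counter

def train_bigrams(corpus_words):
    """Count segment bigrams, padding each word with '#' boundary symbols."""
    bigrams, unigrams = Counter(), Counter()
    for word in corpus_words:
        segs = ["#"] + word.split() + ["#"]
        for a, b in zip(segs, segs[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return bigrams, unigrams

def score(word, bigrams, unigrams, smoothing=1e-6):
    """Product of conditional bigram probabilities P(b | a), with a small
    additive constant to avoid zero probabilities for unseen sequences."""
    segs = ["#"] + word.split() + ["#"]
    prob = 1.0
    for a, b in zip(segs, segs[1:]):
        prob *= (bigrams[(a, b)] + smoothing) / (unigrams[a] + smoothing)
    return prob

corpus = ["t a k", "k a t", "t a t a"]          # toy training data
bg, ug = train_bigrams(corpus)
print(score("t a k", bg, ug), score("k t a", bg, ug))  # attested > unattested
```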
- 
            As we look to the future of natural history collections and a global integration of biodiversity data, we are reliant on a diverse workforce with the skills necessary to build, grow, and support the data, tools, and resources of the Digital Extended Specimen (DES; Webster 2019, Lendemer et al. 2020, Hardisty 2020). Future “DES Data Curators” – those who will be charged with maintaining resources created through the DES – will require skills and resources beyond what is currently available to most natural history collections staff. In training the workforce to support the DES, we have an opportunity to broaden our community and ensure that, through the expansion of biodiversity data, the workforce landscape itself is diverse, equitable, inclusive, and accessible. A fully implemented DES will provide training that encapsulates capacity building, skills development, unifying protocols and best-practices guidance, and cutting-edge technology that also creates inclusive, equitable, and accessible systems, workflows, and communities. As members of the biodiversity community and the current workforce, we can leverage our knowledge and skills to develop innovative training models that: include a range of educational settings and modalities; address the needs of new communities not currently engaged with digital data; and, from their onset, provide attribution for past and future work and do not perpetuate the legacy of colonial practices and historic inequalities found in many physical natural history collections. Recent reports from the Biodiversity Collections Network (BCoN 2019) and the National Academies of Sciences, Engineering, and Medicine (National Academies of Sciences, Engineering, and Medicine 2020) specifically address workforce needs in support of the DES. To address workforce training and inclusivity within the context of global data integration, the Alliance for Biodiversity Knowledge included a topic on workforce capacity development and inclusivity in Phase 2 of the consultation on Converging Digital Specimens and Extended Specimens - Towards a global specification for data integration. Across these efforts, several common themes have emerged relative to workforce training and the DES.

            A call for a community needs assessment: As a community, we have several unknowns related to the current collections workforce and training needs. We would benefit from a baseline assessment of collections professionals to define current job responsibilities, demographics, education and training, incentives, compensation, and benefits. This includes an evaluation of current employment prospects and opportunities.

            Defined skills and training for the 21st-century collections professional: We need to be proactive and define the 21st-century workforce skills necessary to support the development and implementation of the DES. When we define the skills and content needs, we can create appropriate training opportunities that include scalable materials for capacity building, educational materials that develop relevant skills, unifying protocols across the DES network, and best-practices guidance for professionals.

            Training for data end-users: We need to train data end-users in biodiversity and data science at all levels of formal and informal education, from primary and secondary education through the existing workforce. This includes developing training and educational materials, creating data portals, and building analyses that are inclusive, accessible, and engage the appropriate community of science educators, data scientists, and biodiversity researchers.

            Foster a diverse, equitable, inclusive, accessible, and professional workforce: As the DES develops and new tools and resources emerge, we need to be intentional in our commitment to building tools that are accessible and in assuring that access is equitable. This includes establishing best practices to ensure the community providing and accessing data is inclusive and representative of the diverse global community of potential data providers and users. Upfront, we must acknowledge and address issues of historic inequalities and colonial practices and provide appropriate attribution for past and future work while ensuring legal and regulatory compliance. Efforts must include creating transparent linkages among data and the humans that create the data that drives the DES.

            In this presentation, we will highlight recommendations for building workforce capacity within the DES that are diverse, inclusive, equitable, and accessible; take into account the requirements of the biodiversity science community; and are flexible enough to meet the needs of an evolving field.
- 
            Large Language Models (LLMs) are reshaping many aspects of materials science and chemistry research, enabling advances in molecular property prediction, materials design, scientific automation, knowledge extraction, and more. Recent developments demonstrate that the latest class of models are able to integrate structured and unstructured data, assist in hypothesis generation, and streamline research workflows. To explore the frontier of LLM capabilities across the research lifecycle, we review applications of LLMs through 34 total projects developed during the second annual Large Language Model Hackathon for Applications in Materials Science and Chemistry, a global hybrid event. These projects spanned seven key research areas: (1) molecular and material property prediction, (2) molecular and material design, (3) automation and novel interfaces, (4) scientific communication and education, (5) research data management and automation, (6) hypothesis generation and evaluation, and (7) knowledge extraction and reasoning from the scientific literature. Collectively, these applications illustrate how LLMs serve as versatile predictive models, platforms for rapid prototyping of domain-specific tools, and much more. In particular, improvements in both open source and proprietary LLM performance through the addition of reasoning, additional training data, and new techniques have expanded effectiveness, particularly in low-data environments and interdisciplinary research. As LLMs continue to improve, their integration into scientific workflows presents both new opportunities and new challenges, requiring ongoing exploration, continued refinement, and further research to address reliability, interpretability, and reproducibility.
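As a minimal illustration of the LLM-as-predictive-model pattern mentioned above (not code from any hackathon project), the sketch below uses few-shot prompting for a toy solubility prediction; the model name, prompt wording, and example labels are illustrative assumptions.

```python
# Hedged sketch: few-shot prompting an LLM for a toy molecular property label.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Predict whether each molecule (SMILES) is soluble in water. "
    "Answer 'soluble' or 'insoluble'.\n"
    "CCO -> soluble\n"        # ethanol
    "c1ccccc1 -> insoluble\n" # benzene
    "CC(=O)O -> "             # acetic acid: the query molecule
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```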