

Title: A Synthesis of the Dibble et al. Controlled Experiments into the Mechanics of Lithic Production
Abstract

Archaeologists have explored a wide range of topics regarding archaeological stone tools and their connection to past human lifeways through experimentation. Controlled experimentation systematically quantifies the empirical relationships among different flaking variables in a controlled and reproducible setting. This approach offers a platform to generate and test hypotheses about the technological decisions of past knappers from the perspective of basic flaking mechanics. Over the past decade, Harold Dibble and colleagues conducted a set of controlled flaking experiments to better understand flake variability using mechanical flaking apparatuses and standardized cores. Results of their studies underscore the dominant impact of exterior platform angle and platform depth on flake size and shape and have led to the synthesis of a flake formation model, namely the EPA-PD model. However, the results also illustrate the complexity of the flake formation process through the influence of other parameters such as core surface morphology and force application. Here we review the work of Dibble and colleagues on controlled flaking experiments, summarizing their findings to date. Our goal is to synthesize what these controlled experiments have taught us about flake variability in order to better understand the flake formation process. With this paper, we include all of the data produced by these prior experiments, along with an explanation of the data, in the Supplementary Information.
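The kind of EPA-PD relationship described above can be illustrated with a toy regression. The sketch below is not the authors' published model; it fits a hypothetical log-linear relationship between flake mass, exterior platform angle (EPA), and platform depth (PD) on synthetic data, purely to show how such a relationship can be estimated. All coefficients, ranges, and units are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical ranges: EPA in degrees, platform depth in mm (illustrative only)
epa = rng.uniform(55.0, 85.0, n)
pd_mm = rng.uniform(2.0, 12.0, n)
# assumed generative relationship, NOT the published EPA-PD model:
# log(flake mass) rises linearly with EPA and with log(platform depth)
log_mass = 0.03 * epa + 1.5 * np.log(pd_mm) + rng.normal(0.0, 0.1, n)

# ordinary least squares recovers the assumed coefficients
X = np.column_stack([np.ones(n), epa, np.log(pd_mm)])
coef, *_ = np.linalg.lstsq(X, log_mass, rcond=None)
```

With real experimental data, the same design matrix would simply be filled with measured EPA, PD, and flake-mass values.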

 
NSF-PAR ID: 10378125
Author(s) / Creator(s): ; ; ; ; ; ; ; ; ; ;
Publisher / Repository: Springer Science + Business Media
Journal Name: Journal of Archaeological Method and Theory
Volume: 30
Issue: 4
ISSN: 1072-5369
Pages: 1284-1325
Sponsoring Org: National Science Foundation
More Like this
  1.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia. Introduction: This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data.
Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally in IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating. The challenge: enabling wide participation. With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
1. No participant should need to have in-depth knowledge of the specific domain (i.e. no participant should need to be a neuroscientist or epileptologist).
2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e. no participant should need to learn how to process fringe data formats and stream data efficiently).
4. No participant should need to provide their own computing resources.
In addition to the above, our platform should further
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
The platform: The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant that allowed users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which will be described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources.
The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participant's code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility Functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform. Measuring success: The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's infrastructure WML. Seven teams persisted until the end of the challenge and submitted final solutions. The best performing solutions reached seizure detection performance that could reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication.
Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants. Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering, and neuroscience combined. Conclusion: Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires collaboration between people with different kinds of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
  2. Technical advances in artificial manipulation of neural activity have precipitated a surge in studying the causal contribution of brain circuits to cognition and behavior. However, complexities of neural circuits challenge interpretation of experimental results, necessitating new theoretical frameworks for reasoning about causal effects. Here, we take a step in this direction, through the lens of recurrent neural networks trained to perform perceptual decisions. We show that understanding the dynamical system structure that underlies network solutions provides a precise account for the magnitude of behavioral effects due to perturbations. Our framework explains past empirical observations by clarifying the most sensitive features of behavior, and how complex circuits compensate and adapt to perturbations. In the process, we also identify strategies that can improve the interpretability of inactivation experiments.

    Significance Statement: Neuroscientists rely heavily on artificial perturbation of neural activity to understand the function of brain circuits. Current interpretations of experimental results often follow a simple logic: the magnitude of a behavioral effect following a perturbation indicates the degree of involvement of the perturbed circuit in the behavior. We model a variety of neural networks with controlled levels of complexity, robustness, and plasticity, showing that perturbation experiments can yield counter-intuitive results when networks are complex enough to allow unperturbed pathways to compensate for the perturbed neurons, or plastic enough to allow continued learning from feedback during perturbations. To rein in these complexities, we develop a Functional Integrity Index that captures alterations in network computations and predicts disruptions of behavior caused by the perturbation.
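The compensation argument can be made concrete with a toy redundant population: when many units carry the same weak signal, silencing a large fraction produces only a modest behavioral deficit. This is an illustrative numpy sketch with invented parameters; it is not the trained networks or the Functional Integrity Index from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials = 50, 1000
stim = rng.choice([-1.0, 1.0], n_trials)  # stimulus category on each trial
# redundant population: every unit weakly encodes the same stimulus
rates = stim[:, None] * 0.2 + rng.normal(0.0, 1.0, (n_trials, n_units))

decide = lambda r: np.sign(r.mean(axis=1))  # simple averaging decoder
acc_full = (decide(rates) == stim).mean()

silenced = rates.copy()
silenced[:, : n_units // 2] = 0.0           # "inactivate" half the units
acc_pert = (decide(silenced) == stim).mean()
```

Because the surviving units still carry the signal, accuracy degrades gracefully rather than collapsing, which is one way a large perturbation can produce a deceptively small behavioral effect.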

     
  3. Abstract

    Web-based experiments are gaining momentum in motor learning research because of the desire to increase statistical power, decrease overhead for human participant experiments, and utilize a more demographically inclusive sample population. However, there is a vital need to understand the general feasibility and considerations necessary to shift tightly controlled human participant experiments to an online setting. We developed and deployed an online experimental platform modeled after established in-laboratory visuomotor rotation experiments to serve as a case study examining remotely collected data quality for an 80-min experiment. Current online motor learning experiments have thus far not exceeded 60 min, and current online crowdsourced studies have a median duration of approximately 10 min. Thus, the impact of a longer-duration, web-based experiment is unknown. We used our online platform to evaluate perturbation-driven motor adaptation behavior under three rotation sizes (±10°, ±35°, and ±65°) and two sensory uncertainty conditions. We hypothesized that our results would follow predictions by the relevance estimation hypothesis. Remote execution allowed us to double (n = 49) the typical participant population size from similar studies. Subsequently, we performed an in-depth examination of data quality by analyzing single-trial data quality, participant variability, and potential temporal effects across trials. Results replicated in-laboratory findings and provided insight into the effect of induced sensory uncertainty on the relevance estimation hypothesis. Our experiment also highlighted several specific challenges associated with online data collection, including potentially smaller effect sizes, higher data variability, and lower recommended experiment duration thresholds. Overall, online paradigms present both opportunities and challenges for future motor learning research.
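Trial-by-trial visuomotor adaptation of the kind probed in such experiments is often summarized with a single-state state-space model. The sketch below simulates adaptation to a 35° rotation with invented retention and learning-rate parameters; it illustrates the behavior these experiments measure and is not a model fitted to this study's data.

```python
# assumed parameters of a single-state state-space model (illustrative values)
A, B = 0.98, 0.15          # retention factor and error-driven learning rate
rotation = 35.0            # imposed visuomotor rotation, in degrees
n_trials = 80

x = 0.0                    # internal estimate of the perturbation
adaptation = []
for _ in range(n_trials):
    error = rotation - x   # visual error experienced on this trial
    x = A * x + B * error  # classic trial-by-trial update
    adaptation.append(x)

asymptote = B * rotation / (1.0 - A + B)  # predicted steady-state adaptation
```

In this family of models, incomplete asymptotic adaptation falls out of the retention term: the steady state sits below the full rotation whenever A < 1.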

     
  4. Despite increased calls for the need for more diverse engineers and significant efforts to “move the needle,” the composition of students, especially women, earning bachelor’s degrees in engineering has not significantly changed over the past three decades. Prior research by Klotz and colleagues (2014) showed that sustainability as a topic in engineering education is a potentially positive way to increase women’s interest in STEM at the transition from high school to college. Additionally, sustainability has become an increasingly prevalent topic in engineering as the need for global solutions that address the environmental, social, and economic aspects of sustainability has become more pressing. However, few studies have examined sustainability-related career goals among upper-level engineering students. This time point is a critical one, as students are transitioning from college to industry or other careers where they may be positioned to solve some of these pressing problems. In this work, we answer the question, “What differences exist between men’s and women’s attitudes about sustainability in upper-level engineering courses?” in order to better understand how sustainability topics may promote women’s interest in and desire to address these needs in their future careers. We used pilot data from the CLIMATE survey given to 228 junior and senior civil, environmental, and mechanical engineering students at a large East Coast research institution. This survey included questions about students’ career goals, college experiences, beliefs about engineering, and demographic information. The students surveyed included 62 third-year students, 96 fourth-year students, 29 fifth-year students, and one sixth-year student.
In order to compare our results on upper-level students’ attitudes about sustainability, we asked the same question as the previous study focused on first-year engineering students: “Which of these topics, if any, do you hope to directly address in your career?” The list of topics included energy (supply or demand), climate change, environmental degradation, water supply, terrorism and war, opportunities for future generations, food availability, disease, poverty and distribution of resources, and opportunities for women and/or minorities. As the answer to this question was binary, either “Yes” or “No,” Pearson’s Chi-squared test with Yates’ continuity correction was performed on each topic, comparing men’s and women’s answers. We found that women are significantly more likely than their male peers to want to address water supply, food availability, and opportunities for women and/or minorities in their careers. Conversely, men were significantly more likely than their female peers to want to address energy and terrorism and war in their careers. Our results begin to help us understand the particular differences that men and women, even far along in their undergraduate engineering careers, may have in their desire to address certain sustainability outcomes in their careers. This work begins to identify topics and pathways that may support women in engineering, and provides comparisons to prior work on early-career undergraduate students. Our future work will include looking at particular student experiences in and out of the classroom to understand how these sustainability outcome expectations develop.
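As a worked illustration of the analysis described above, the snippet below runs Pearson's Chi-squared test with Yates' continuity correction on a 2×2 table of yes/no answers by gender for a single topic (SciPy applies the correction by default for 2×2 tables). The counts are invented stand-ins, not the survey's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical counts for one topic: rows = (women, men), cols = (yes, no)
table = np.array([[30, 20],
                  [60, 118]])

# correction=True (the default) applies Yates' continuity correction for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
```

A small p-value here would indicate that the proportion answering “Yes” differs between groups; the test would be repeated once per topic, as in the study.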
  5. Abstract
    Background

    Many institutional and departmentally focused change efforts have sought to improve teaching in STEM through the promotion of evidence-based instructional practices (EBIPs). Even with these efforts, EBIPs have not become the predominant mode of teaching in many STEM departments. To better understand institutional change efforts and the barriers to EBIP implementation, we developed the Cooperative Adoption Factors Instrument (CAFI) to probe faculty member characteristics beyond demographic attributes at the individual level. The CAFI probes multiple constructs related to institutional change including perceptions of the degree of mutual advantage of taking an action (strategic complements), trust and interconnectedness among colleagues (interdependence), and institutional attitudes toward teaching (climate).

    Results

    From data collected across five STEM fields at three large public research universities, we show that the CAFI has evidence of internal structure validity based on exploratory and confirmatory factor analysis. The scales have low correlations with each other and show significant variation among our sampled universities, as demonstrated by ANOVA. We further demonstrate, through regression analysis, a relationship between the strategic complements and climate factors and EBIP adoption. In addition to these factors, we also find that indegree, a measure of opinion leadership, correlates with EBIP adoption.

    Conclusions

    The CAFI uses the CACAO model of change to link the intended outcome of EBIP adoption with perception of EBIPs as mutually reinforcing (strategic complements), perception of faculty having their fates intertwined (interdependence), and perception of institutional readiness for change (climate). Our work has established that the CAFI is sensitive enough to pick up on differences between three relatively similar institutions and captures significant relationships with EBIP adoption. Our results suggest that the CAFI is likely to be a suitable tool to probe institutional change efforts, both for change agents who wish to characterize the local conditions on their respective campuses to support effective planning for a change initiative and for researchers who seek to follow the progression of a change initiative. While these initial findings are very promising, we also recommend that CAFI be administered in different types of institutions to examine the degree to which the observed relationships hold true across contexts.
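The ANOVA step reported in the Results can be sketched as follows. The scale scores, group sizes, means, and spreads below are invented stand-ins for the three universities, shown only to make the shape of the comparison concrete.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# hypothetical climate-scale scores at three universities (invented parameters)
uni_a = rng.normal(3.2, 0.6, 80)
uni_b = rng.normal(3.5, 0.6, 75)
uni_c = rng.normal(2.9, 0.6, 70)

# one-way ANOVA testing whether mean scale scores differ across institutions
F, p = f_oneway(uni_a, uni_b, uni_c)
```

A significant F here corresponds to the paper's finding that the CAFI scales vary detectably even among relatively similar institutions.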

     