

Title: Challenges and Lessons Deploying a Physical System for Resource Exchange in Local Communities
This interactive poster discusses challenges and lessons learned designing and deploying ShareBox, a hardware-based system that enables people to share physical resources within local communities. Our goal in sharing the insights and struggles we encountered creating ShareBox is to help other researchers working on similar platforms avoid the pitfalls that impacted our research.
Award ID(s):
1665169
NSF-PAR ID:
10171861
Author(s) / Creator(s):
Date Published:
Journal Name:
CSCW '19: Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Our NSF-funded ITEST project focuses on the collaborative design, implementation, and study of recurrent hands-on engineering activities with middle school youth in three rural communities in or near Appalachia. To achieve this aim, our team of faculty and graduate students partners with school educators and industry experts embedded in students’ local communities to collectively develop curriculum aimed at teacher-identified science standards and to facilitate regular in-class interventions throughout the academic year. Leveraging local expertise is especially critical in this project because family pressures, cultural milieu, and a preference for local, stable jobs play considerable roles in how Appalachian youth choose possible careers. Our partner communities have voluntarily opted to participate with us in a shared implementation-research program, and as our project unfolds we are responsive to community-identified needs and preferences while maintaining the research program’s integrity. Our primary focus has been incorporating hands-on activities aimed at state science standards into science classrooms, in recognition of the demands placed on teachers to align classroom time with state standards and associated standardized achievement tests. Our focus on serving diverse communities, together with attention to relevant research such as the preference for local, stable jobs and the importance of cultural relevance, led us to reach out to advanced manufacturing facilities based in the target communities in order to strengthen the connection students and teachers feel to local engineers. Each manufacturer has committed to designating several employees (engineers) to co-facilitate interventions six times each academic year. Launching our project has involved coordination across stakeholder groups to understand distinct values, goals, strengths, and needs. In the first academic year, we are working with nine different 6th-grade science teachers across seven schools in three counties. 
Co-facilitating in the classroom are representatives from our project team, graduate student volunteers from across the college of engineering, and volunteering engineers from our three industry partners. Developing this multi-stakeholder partnership has involved discussions and approvals across both school systems (e.g., superintendents, STEM coordinators, teachers) and our industry partners (e.g., managers, HR staff, volunteering engineers). The aim of this engagement-in-practice paper is to explore our lessons learned in navigating the day-to-day challenges of (1) developing and facilitating curriculum at the intersection of science standards, hands-on activities, cultural relevancy, and engineering thinking, (2) collaborating with volunteers from our industry partners and within our own college of engineering in order to deliver content in every science class of our nine 6th-grade teachers for one full school day per month, and (3) adapting to emergent needs that arise due to school and division differences (e.g., logistics of scheduling and curriculum pacing), community differences across our three counties (e.g., available resources in schools), and partner constraints. 
  2. Abstract Purpose The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers is a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies. Design/methodology/approach The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net, i.e., a Web of Science (WOS) search for a given author’s last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for ‘John Doe’ would take the form ‘Doe, J’). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. 
Author identifiers provide a good starting point: if ‘Doe, J’ and ‘Doe, John’ share the same author identifier, this is sufficient for us to conclude they are one and the same individual. We find email addresses similarly adequate: if two author names that share the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if ‘Doe, John’ and ‘Doe, J’ have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; institutions have employed two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same surname and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. 
The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user’s part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this, the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author’s ORCID iD or email address attached to it). The script proceeds to identify and combine all author names that share the primary author’s surname and first initial and that share commonalities in the WOS field on which the user chose to consolidate author names. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to). Research limitations Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user’s part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). 
Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary. Practical implications The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist. Originality/value Once again, the procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting approaches with more recent ones, harnessing the benefits of both. Findings Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each of our case studies. We find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, timeframes analyzed, and subject areas in which the authors publish differ across the three case studies. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, as well as a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable here. The procedure advanced herein is practical, replicable, and relatively user friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with. 
The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user’s part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage. 
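The iterative consolidation step described in this abstract can be sketched in plain Python. This is only an illustration of the idea, not the authors' VantagePoint script: the record format, field names, and data below are all hypothetical. Starting from one known true-positive name variant, any variant that shares a value in the chosen match field with an already-accepted variant is folded in, and the process repeats until no new links appear.

```python
# Hypothetical sketch of field-commonality name consolidation.
# Each record pairs an author-name variant with fielded metadata
# (e.g., emails, co-authors, source titles). Names and data are made up.

def consolidate(records, primary, field):
    """Return the set of name variants linked to `primary`
    through shared values in `field`."""
    accepted = {primary}
    accepted_values = {v for name, fields in records
                       if name == primary
                       for v in fields.get(field, [])}
    changed = True
    while changed:  # iterate until no new links appear
        changed = False
        for name, fields in records:
            if name in accepted:
                continue
            values = set(fields.get(field, []))
            if accepted_values & values:  # shared value -> same person
                accepted.add(name)
                accepted_values |= values
                changed = True
    return accepted

records = [
    ("Doe, John", {"email": ["jdoe@uni.edu"]}),
    ("Doe, J",    {"email": ["jdoe@uni.edu", "john.doe@lab.org"]}),
    ("Doe, J",    {"email": ["john.doe@lab.org"]}),
    ("Doe, Jane", {"email": ["jane.doe@other.edu"]}),
]
print(sorted(consolidate(records, "Doe, John", "email")))
# ['Doe, J', 'Doe, John'] -- 'Doe, Jane' shares no email, so it is excluded
```

Running the same function over several match fields in turn (emails, then co-authors, then source titles, and so on) gives the multiple rounds of reduction the abstract describes.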
  3. Obeid, I. ; Selesnick, I. (Ed.)
    The Neural Engineering Data Consortium at Temple University has been providing key data resources to support the development of deep learning technology for electroencephalography (EEG) applications [1-4] since 2012. We currently have over 1,700 subscribers and have been providing data, software, and documentation from our web site [5] since 2012. In this poster, we introduce additions to our resources that have been developed within the past year to facilitate software development and big data machine learning research. Major resources released in 2019 include:
● Data: The most current release of our open source EEG data is v1.2.0 of TUH EEG and includes the addition of 3,874 sessions and 1,960 patients from mid-2015 through 2016.
● Software: We have recently released a package, PyStream, that demonstrates how to correctly read an EDF file and access samples of the signal. This software demonstrates how to properly decode channels based on their labels and how to implement montages. Most existing open source packages for reading EDF files do not directly address the problem of channel labels [6].
● Documentation: We have released two documents that describe our file formats and data representations: (1) electrodes and channels [6]: describes how to map channel labels to physical locations of the electrodes, and includes a description of every channel label appearing in the corpus; (2) annotation standards [7]: describes our annotation file format and how to decode the data structures used to represent the annotations.
Additional significant updates to our resources include:
● NEDC TUH EEG Seizure (v1.6.0): This release includes the expansion of the training dataset from 4,597 files to 4,702. Calibration sequences have been manually annotated and added to our existing documentation. Numerous corrections were made to existing annotations based on user feedback. 
● IBM TUSZ Pre-Processed Data (v1.0.0): A preprocessed version of the TUH Seizure Detection Corpus using two methods [8], both of which use an FFT sliding window approach (STFT). In the first method, FFT log magnitudes are used. In the second method, the FFT values are normalized across frequency buckets, correlation coefficients are calculated, and the eigenvalues are computed from the resulting correlation matrix. The eigenvalues and the upper triangle of the correlation matrix are used to generate features.
● NEDC TUH EEG Artifact Corpus (v1.0.0): This corpus was developed to support modeling of non-seizure signals for problems such as seizure detection. We have been using the data to build better background models. Five artifact events have been labeled: (1) eye movements (EYEM), (2) chewing (CHEW), (3) shivering (SHIV), (4) electrode pop, electrostatic artifacts, and lead artifacts (ELPP), and (5) muscle artifacts (MUSC). The data is cross-referenced to TUH EEG v1.1.0 so you can match patient numbers, sessions, etc.
● NEDC Eval EEG (v1.3.0): In this release of our standardized scoring software, the False Positive Rate (FPR) definition of the Time-Aligned Event Scoring (TAES) metric has been updated [9]. The standard definition is the number of false positives divided by the number of false positives plus the number of true negatives: #FP / (#FP + #TN).
We also recently introduced the ability to download our data from an anonymous rsync server. The rsync command [10] synchronizes a remote directory with a local directory, copying the selected folder from the server to the desktop. It is available as part of most, if not all, Linux and Mac distributions (unfortunately, there is no acceptable port of this command for Windows). To use the rsync command to download content from our website, both a username and a password are needed; an automated registration process on our website grants both. 
An example of a typical rsync command to access the data on our website is:
rsync -auxv nedc_tuh_eeg@www.isip.piconepress.com:~/data/tuh_eeg/
Rsync is a more robust option for downloading data. We have also experimented with Google Drive and Dropbox, but these types of technology are not suitable for such large amounts of data. All of the resources described in this poster are open source and freely available at https://www.isip.piconepress.com/projects/tuh_eeg/downloads/. We will demonstrate how to access and utilize these resources during the poster presentation and collect community feedback on the most needed additions to enable significant advances in machine learning performance. 
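The montage implementation mentioned alongside PyStream can be illustrated with a small sketch. A bipolar montage channel is conventionally formed as the difference of two referential channels; the channel names follow the label style described in the electrodes-and-channels documentation, but the sample values here are made up and the code is not taken from PyStream itself:

```python
# Hypothetical sketch of applying one bipolar montage channel to
# referential EEG signals. Sample values are invented for illustration.

signals = {
    "EEG FP1-REF": [10.0, 12.0, 11.0],  # referential channel at FP1
    "EEG F7-REF":  [ 4.0,  5.0,  6.0],  # referential channel at F7
}

def montage_channel(signals, anode, cathode):
    """A bipolar montage channel is the sample-wise difference
    of two referential channels (anode minus cathode)."""
    return [a - c for a, c in zip(signals[anode], signals[cathode])]

fp1_f7 = montage_channel(signals, "EEG FP1-REF", "EEG F7-REF")
print(fp1_f7)  # [6.0, 7.0, 5.0]
```

Decoding which physical electrodes a label such as "EEG FP1-REF" refers to is exactly the channel-label problem the documentation in [6] addresses.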
  4. Alongside the continued evolution of the field of engineering education, the number of early career faculty members who identify as members of the discipline continues to increase. This growth has resulted in a new wave of roles, titles, and experiences for engineering education researchers, many of which have yet to be explored and understood. To address this gap, our research team is investigating the ways in which early career engineering education faculty are able to achieve impact in their current roles. Our aim is to provide insights on the ways in which these researchers can have new and evolving forms of impact within the engineering education field. The work presented herein explores the transition experiences of our research team, consisting of six early-career faculty, and the ways in which we experience agency at the individual, institutional, and field and societal levels. Doing so is necessary to consider the diverse backgrounds, visions, goals, plans, and commitments of early career faculty members. Guided by two qualitative research methodologies, collaborative inquiry and collaborative autoethnography, we are able to explore our lived experiences and respective academic cultures through iterative cycles of reflection and action toward agency. The poster presented will provide an update on our NSF RFE work through Phase 1 of our two-phase investigation. Thus far, the investigation has involved analysis of our reflections from the first two years of our faculty roles to identify critical incidents within the early career transition and the development of our identities as faculty members. Additionally, we have collected reflective data to understand each of our goals, relevant aspects of our identities, and desired areas of impact. Analysis of the transition has resulted in new insights on the aspects of transition, focusing on the types of impactful situations and the supports and strategies that are utilized. 
Analysis has begun to explore the role of identity in each member's desired areas of impact and their ability to have impact. Data will also be presented from a survey of near peers, providing insight into the ways in which early career engineering education faculty believe they are able to, and desire to, have impact in their current positions. The collective analysis around the transition into a faculty role, strategic actions of new faculty, desired impact areas, and faculty identity will play a role in the development of our conceptual model of early career faculty agency. Additionally, this analysis provides the groundwork for phase two of our study, where we will seek to place the experiences of our group within the context of the larger community of early career engineering education faculty. 