Title: CubeSat Radiation Hardness Assurance Beyond Total Dose: Evaluating Single Event Effects
Radiation poses known and serious risks to smallsat survivability and mission duration, with effects falling into two categories: long-term total ionizing dose (TID) and instantaneous single event effects (SEEs). Although literature exists on addressing TID in smallsats, few resources exist for addressing SEEs. Many varieties of SEEs exist, such as bit upsets and latch-ups, which can occur in any electronic component containing active semiconductors (such as transistors). SEE consequences range from benign to destructive, so mission reliability can be enhanced by implementing fault protection strategies based on predicted SEE rates. Unfortunately, SEE rates are most reliably estimated through experimental testing that is often too costly for smallsat-scale missions. Prior test data published by larger programs exist, but may be sparse or incompatible with the environment of a particular mission. Despite these limitations, a process may be followed to gain insights and make informed design decisions for smallsats in the absence of hardware testing capabilities or similar test data. This process is: (1) define the radiation environment; (2) identify the most critical and/or susceptible components on a spacecraft; (3) search for compatible prior test data and/or component class data; (4) evaluate mission-specific SEE rates from the available data; (5) study the rates alongside the mission requirements to identify high-risk areas for potential mitigation. The methodology developed in this work is based on the multi-institutional, National Science Foundation (NSF) Space Weather Atmospheric Reconfigurable Multiscale Experiment (SWARM-EX) mission. The steps taken during SWARM-EX's radiation analysis, alongside the detailed methodology, serve as a case study for how these techniques can be applied to increase the reliability of a university-scale smallsat mission.
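For illustration, the rate-evaluation step (4) is often approximated by folding a cross-section curve fitted to prior heavy-ion test data with the mission's LET spectrum. The sketch below shows that fold in Python; the Weibull parameters and power-law flux are hypothetical placeholders, not SWARM-EX values, and a real analysis would use validated environment models and tools such as CREME96.

```python
import numpy as np

def weibull_cross_section(let, sigma_sat, let_onset, width, shape):
    """Per-device cross-section (cm^2) versus LET (MeV-cm^2/mg)."""
    sigma = np.zeros_like(let)
    above = let > let_onset
    sigma[above] = sigma_sat * (1.0 - np.exp(-(((let[above] - let_onset) / width) ** shape)))
    return sigma

# Hypothetical Weibull fit for a candidate part (stand-in for prior test data).
sigma_sat, let_onset, width, shape = 1e-6, 2.0, 20.0, 1.5

# Hypothetical differential LET flux (particles/cm^2/day per MeV-cm^2/mg),
# standing in for an environment-model export for the mission orbit.
let_grid = np.linspace(2.1, 100.0, 2000)
diff_flux = 1e3 * let_grid ** -3.0

# Rate = integral of sigma(LET) * dPhi/dLET over LET (upsets/device/day).
dlet = let_grid[1] - let_grid[0]
sigma = weibull_cross_section(let_grid, sigma_sat, let_onset, width, shape)
rate = np.sum(sigma * diff_flux) * dlet
print(f"Estimated SEE rate: {rate:.3e} upsets/device/day")
```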
Award ID(s): 1936665
NSF-PAR ID: 10354206
Journal Name: Small Satellite Conference
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Purpose

    The ability to identify the scholarship of individual authors is essential for performance evaluation, but several factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases, and variations in name spelling further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers is a major step toward resolving these difficulties. We maintain, however, that author identifiers alone are not sufficient to fully address the author uncertainty problem. In this study we build on the author-identifier approach by considering commonalities in fielded data between authors sharing the same surname and the first initial of their first name. We illustrate our approach using three case studies.

    Design/methodology/approach

    The approach we advance is based on commonalities among fielded data in search results. We cast a broad initial net: a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would take the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to the author in the given database (all of his or her true positives), along with a large amount of noise, or scholarship not belonging to the author (a large number of false positives). From this corpus we iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point: if 'Doe, J' and 'Doe, John' share the same author identifier, that is sufficient to conclude they are one and the same individual. We find email addresses similarly adequate: if two author names sharing the same surname and first initial have an email address in common, we conclude the authors are the same person. Author identifier and email address data are not always available, however; when this occurs, other fields are used to address the author uncertainty problem. Commonalities among fields other than unique identifiers and email addresses are less conclusive for name consolidation. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an affiliation may employ two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, particularly when used in combination. Our approach makes use of automation as well as manual inspection, relying first on author identifiers, then on commonalities among other fielded data, and finally on manual verification.

    To achieve name consolidation independent of author-identifier matches, we developed a procedure for use with the bibliometric software VantagePoint (see www.thevantagepoint.com). While our technique does not depend exclusively on VantagePoint, it is the software we found most efficient in this study. The script we developed implements the name disambiguation procedure in a way that significantly reduces manual effort on the user's part; those who seek to replicate the procedure without VantagePoint can follow the method manually, though doing so takes significant time and effort, especially with larger datasets. The script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names, to point to the name of the authors field, and finally to identify a specific author name within this field (referred to by the script as the primary author) whom the user knows to be a true positive (a suggested approach is to point to an author name associated with a record that has the author's ORCID iD or email address attached to it). The script then identifies and combines all author names sharing the primary author's surname and first initial who share commonalities in the selected WOS field. This typically yields a significant reduction in the initial dataset size, leaving a much smaller (and more manageable) dataset for the user to inspect manually and/or subject to additional name disambiguation techniques.

    Research limitations

    Match field coverage can be an issue: when coverage is paltry, dataset reduction is less significant, requiring more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique is sometimes, but not always, likely to struggle with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

    Practical implications

    The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a common family name. It is more effective when match field coverage is high and a number of match fields exist.

    Originality/value

    The procedure combines preexisting approaches with more recent ones, harnessing the benefits of both, and can save substantial time and effort in name disambiguation research.

    Findings

    We apply the name disambiguation procedure to three case studies. Ideal match fields are not the same across the case studies, and we find that match field effectiveness is in large part a function of field coverage. The case studies differ in original dataset size, timeframe analyzed, and the subject areas in which the authors publish. Our procedure is most effective when applied to the third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, and to a more modest, manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable here; machine learning approaches typically look for commonalities among citation data, which are not always available, structured, or easy to work with. The procedure advanced herein is practical, replicable, and relatively user-friendly, and it might be categorized in a space between ORCID and machine learning. It is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to yield a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction after the procedure is applied is in large part a function of (a) field availability and (b) field coverage.
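    For illustration, the core of this field-commonality consolidation can be expressed in a few lines of Python with pandas rather than VantagePoint. The column names ("author", "orcid", "email", "source_title") and the CSV filename below are hypothetical placeholders; real Web of Science exports differ, and this is a sketch of the logic, not the authors' script.

```python
import pandas as pd

def consolidate(records, primary, fields):
    """Return a boolean mask over records, grown one match field at a time
    from a known true-positive (primary) author-name variant."""
    confirmed = records["author"] == primary      # seed with the known variant
    for field in fields:
        # Field values already attached to confirmed records...
        values = set(records.loc[confirmed, field].dropna())
        # ...pull in any other record sharing one of those values.
        confirmed |= records[field].isin(values)
    return confirmed

# Broad initial net: all records returned by a 'Doe, J' search (hypothetical file).
records = pd.read_csv("doe_j_wos_export.csv")
mask = consolidate(records, primary="Doe, John",
                   fields=["orcid", "email", "source_title"])
print(f"{len(records)} records reduced to {mask.sum()} for manual inspection")
```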
  2. Weather, winds, thermals, and turbulence pose an ever-present challenge to small UAS. These challenges become magnified in rough terrain and especially within urban canyons. As the industry moves towards Beyond Visual Line of Sight (BVLOS) and fully autonomous operations, resilience to weather perturbations will be key. As the human decision-maker is removed from the in-situ environment, producing robust control systems will be paramount to the preservation of any airspace system. Safety requirements and regulations demand quantifiable performance metrics to guarantee a safe aerial environment with ever-increasing traffic. In this regard, the effect of wind and weather disturbances on a UAS, and its ability to reject these disturbances, presents some unique concerns. Currently, drone manufacturers and operators rely on outdoor testing during windy days (or in windy locations) and onboard logging to evaluate and improve the flight worthiness, reliability, and perturbation-rejection capability of their vehicles. Waiting for the desired weather or travelling to a windier location is cost- and time-inefficient; moreover, the conditions found on outdoor test sites are difficult to quantify, and repeatability is non-existent. To address this situation, a novel testing methodology is proposed, combining artificial wind generation from a multi-fan array wind generator (windshaper), coherent GNSS signal generation, and accurate tracking of the test subject by motion-capture cameras. In this environment, the drone being tested can fly freely, follow missions, and experience wind perturbations while staying in a modest indoor volume. By coordinating the windshaper, the motion-tracking feedback, and the position emulated by the GNSS signal generator with the drone's mission profile, it was demonstrated that outdoor flight conditions can be reliably recreated in a controlled and repeatable environment. Specifically, thanks to real-time updates of the position simulated by the GNSS signal generator, it was possible to demonstrate that the drone's perception of the situation is similar to that of a corresponding mission executed outdoors. In this work, the drone was subjected to three distinct flight cases: (1) hover in 2 m/s wind, (2) forward flight at 2 m/s without wind, and (3) forward flight at 2 m/s with a 2 m/s headwind. In each case, it could be demonstrated that, using indoor GNSS signal simulation and wind generation, the drone behaves as if it has moved 20 m forward while actually remaining stationary in the test volume to within ±1 m. Further development of this methodology opens the door to fully integrated hardware-in-the-loop simulation of drone flight operations.
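    As a rough illustration of that coordination, the sketch below slaves an emulated GNSS fix to motion-capture feedback plus an accumulating virtual forward track while commanding a constant wind. Every class and method name is an illustrative stand-in; real windshaper, GNSS-simulator, and motion-capture APIs will differ.

```python
import time

class Windshaper:          # stand-in for the multi-fan array controller
    def set_wind(self, speed_mps, heading_deg):
        pass

class GnssSimulator:       # stand-in for the GNSS signal generator
    def set_position(self, lat, lon, alt):
        pass

class MotionCapture:       # stand-in for the motion-capture system
    def get_position(self):
        return 0.0, 0.0, 1.5   # x, y, z in metres inside the test volume

ORIGIN_LAT, ORIGIN_LON = 46.520, 6.630               # arbitrary virtual origin
M_PER_DEG_LAT, M_PER_DEG_LON = 111_320.0, 78_000.0   # crude flat-earth scaling

def run_case(wind_mps, track_mps, duration_s, rate_hz=50):
    """One flight case: constant wind, GNSS fix slaved to mocap plus an
    accumulating virtual forward track."""
    ws, gnss, mocap = Windshaper(), GnssSimulator(), MotionCapture()
    ws.set_wind(speed_mps=wind_mps, heading_deg=180.0)
    virtual_north = 0.0
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        x, y, z = mocap.get_position()
        virtual_north += track_mps / rate_hz    # advance the virtual track
        gnss.set_position(
            lat=ORIGIN_LAT + (virtual_north + y) / M_PER_DEG_LAT,
            lon=ORIGIN_LON + x / M_PER_DEG_LON,
            alt=z,
        )
        time.sleep(1.0 / rate_hz)

run_case(wind_mps=2.0, track_mps=2.0, duration_s=10.0)  # e.g. case (3): headwind
```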
  3. Mission-critical wireless networks are being upgraded to 4G long-term evolution (LTE). Rather than capacity, these networks require very high reliability and security, as well as easy deployment and operation in the field. Wireless communication systems have been vulnerable to jamming, spoofing, and other radio frequency attacks since the early days of analog systems, and although wireless systems have evolved, important security and reliability concerns remain. This paper presents our methodology and results for testing 4G LTE operating in harsh signaling environments. We use software-defined radio technology and open-source software to develop a fully configurable, protocol-aware interference waveform. We define several test cases that target the entire LTE signal, or parts of it, to evaluate the performance of a mission-critical production LTE system. Our experimental results show that synchronization signal interference in LTE causes significant throughput degradation at low interference power. By dynamically evaluating the performance measurement counters, the k-nearest neighbor classification method can detect the specific RF signaling attack to aid in effective mitigation.
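    For illustration, the counter-based detection idea maps naturally onto an off-the-shelf classifier. The sketch below trains a k-nearest neighbor model on vectors of performance counters; the feature layout, labels, and randomly generated data are hypothetical stand-ins, not the paper's measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature vectors per measurement window, e.g.
# [throughput, BLER, RSRP, sync-failure count], with labels naming the
# interference type; randomly generated here purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = rng.choice(["none", "pss_sss", "pbch", "wideband"], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```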
  4. Distributed space systems, and specifically spacecraft formations, have been identified as a new paradigm for addressing important science questions. When it comes to verifying and validating these systems before launch, however, there is the added challenge of testing the formation's holistic operations on the ground, since a full end-to-end mission simulation is likely infeasible given the costly testing infrastructure and facilities it would require. Building on established methods for single-spacecraft testing, this paper presents a two-phase testing methodology that can be applied to precision formation flying missions with budget, timeframe, and resource constraints. First, a testing plan with unique considerations addressing the coordinated and coupled nature of precision formation flight is devised to obtain high system confidence on the ground; second, the formation's holistic behavior is refined on orbit during the mission's in-space commissioning. This approach structures the pre-launch testing to make efficient use of the limited test infrastructure on hand, and it leverages a sequential configuration process combined with built-in operational flexibility on orbit to safely finish characterizing the formation's performance so that it can meet mission requirements.
  5. Abstract

    Background

    This study examined the relations between students' expectancies for success and a physiological component of test anxiety, salivary cortisol, during an authentic testing setting.

    Aims

    The aim of the study was to better understand the connection between shifts in students' control appraisals and changes in the physiological component of test anxiety.

    Sample

    The study comprised 45 undergraduate engineering majors in the United States.

    Methods

    Survey data concerning students' expectancy for success, along with saliva samples, were collected before, during, and after a practice midterm examination held prior to the students' actual in-class examination.

    Results

    Students' expectancy for success declined during the examination while cortisol levels declined from the beginning to middle of the examination and began to increase again as a function of time. Although students' initial levels of expectancy for success and cortisol were not correlated, there was a negative relation between change in cortisol and change in expectancy for success.

    Conclusions

    Our study demonstrates a relation between salivary cortisol, a physiological component of test anxiety, and students' expectancy for success in an authentic testing context. Most students saw a decrease in cortisol during the examination, suggesting anticipatory anxiety prior to the test and a return to homeostasis as the examination progressed. Some students, however, did not show a decline in cortisol, suggesting they may not have recovered from pre-examination anxiety. The negative relation between change in cortisol and change in expectancy for success suggests that students who had the greatest decrease in expectancy for success saw the smallest recovery in cortisol.
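    For illustration, the reported change-score relation corresponds to the kind of analysis sketched below: correlate each student's change in cortisol with his or her change in expectancy for success. The arrays are randomly generated placeholders constructed to show a negative correlation, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 45                                   # sample size matching the study
# Fabricated placeholder change scores (post minus pre), NOT study data.
delta_expectancy = rng.normal(-0.5, 1.0, n)
delta_cortisol = -0.4 * delta_expectancy + rng.normal(0.0, 1.0, n)

r, p = stats.pearsonr(delta_cortisol, delta_expectancy)
print(f"r = {r:.2f}, p = {p:.3f}")       # a negative r mirrors the reported relation
```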

     