Designing technologies that support the mutual cybersecurity and autonomy of older adults facing cognitive challenges requires close collaboration between partners. As part of research to design a Safety Setting application for older adults with memory loss or mild cognitive impairment (MCI), we used a scenario-based participatory design approach. Our study builds on previous findings that couples' approach to memory loss was characterized by a desire for flexibility and choice, and an embrace of role uncertainty. We find that couples do not want a system that fundamentally alters their relationship; they seek to maximize self-surveillance competence and minimize loss of autonomy for their partners. All desire Safety Settings that maintain their mutual safety rather than designating one partner as the target of oversight. Couples are open to more rigorous surveillance if they have control over what types of activities trigger various levels of oversight.
“Citizens Too”: Safety Setting Collaboration Among Older Adults with Memory Concerns
Designing technologies that support the cybersecurity of older adults with memory concerns involves wrestling with an uncomfortable paradox between surveillance and independence, and with the close collaboration of couples. This research captures the interactions between older adult couples, where one or both have memory concerns—a primary feature of cognitive decline—as they make decisions on how to safeguard their online activities using a Safety Setting probe we designed, across several informal interviews and a diary study. Throughout, couples demonstrated a collaborative mentality to which we apply a frame of citizenship in open-source collaboration, specifically (a) histories of participation, (b) lower barriers to participation, and (c) maintaining ongoing contribution. In this metaphor of collaborative enterprise, one partner may be the service provider and the other the participant, but at varying moments they may switch roles while still maintaining a collaborative focus on preserving shared assets and freedom on the internet. We conclude with a discussion of what this service provider–contributor mentality means for empowerment through citizenship, and implications for vulnerable populations' cybersecurity.
- Journal Name: ACM Transactions on Computer-Human Interaction
- Page Range or eLocation-ID: 1 to 32
- Sponsoring Org: National Science Foundation
More Like this
This paper investigates qualitatively what happens when couples facing a spectrum of options must arrive at consensual choices together. We conducted an observational study of couples experiencing memory concerns (one or both) while the partners engaged in the process of reviewing and selecting "Safety Setting" options for online activities. Couples' choices tended to be influenced by a desire to secure shared assets through mutual surveillance and a desire to preserve autonomy by granting freedom in social and personal activities. The availability of choice suits the uneven and unpredictable process of memory loss and couples' acknowledged uncertainty about its trajectory, leading them to anticipate changing Safety Settings as one or both of them experience further cognitive decline. Reflecting these three decision drivers, we conclude with implications for a design system that offers flexibility and adaptability in a variety of settings, accommodates the uncertainty of memory loss, preserves autonomy, and supports collaborative management of shared assets.
Insights from the First Two Years of a Project Partnering Middle School Teachers with Industry to Bring Engineering to the Science Classroom
Barriers to broadening participation in engineering to rural and Appalachian youth include misalignment with family and community values, lack of opportunities, and community misperceptions of engineering. While single interventions are unlikely to stimulate change in these areas, more sustainable interventions that are co-designed with local relevance appear promising. Through our NSF ITEST project, we test the waters of this intervention model through partnership with school systems and engineering industry to implement a series of engineering-themed, standards-aligned lessons for the middle school science classroom. Our mixed methods approach includes collection of interview and survey data from administrators, teachers, engineers, and university affiliates as well as observation and student data from the classroom. We have utilized theory from learning science and organizational collaboration to structure and inform our analysis and explore the impact of our project. The research is guided by the following questions: RQ 1: How do participants conceptualize engineering careers? How and why do such perceptions shift throughout the project? RQ 2: What elements of the targeted intervention affect student motivation towards engineering careers, specifically with regard to developing competencies and ability beliefs regarding engineering? RQ 3: How can strategic collaboration between K12 and industry promote a shift in teachers' …
Artificial intelligence (AI) and cybersecurity are in-demand skills, but little is known about what factors influence computer science (CS) undergraduate students' decisions on whether to specialize in AI or cybersecurity and how these factors may differ between populations. In this study, we interviewed undergraduate CS majors about their perceptions of AI and cybersecurity. Qualitative analyses of these interviews show that students have narrow beliefs about what kind of work AI and cybersecurity entail, the kinds of people who work in these fields, and the potential societal impact AI and cybersecurity may have. Specifically, students tended to believe that all work in AI requires math and training models, while cybersecurity consists of low-level programming; that innately smart people work in both fields; that working in AI comes with ethical concerns; and that cybersecurity skills are important in contemporary society. Some of these perceptions reinforce existing stereotypes about computing and may disproportionately affect the participation of students from groups historically underrepresented in computing. Our key contribution is identifying beliefs that students expressed about AI and cybersecurity that may affect their interest in pursuing the two fields and may, therefore, inform efforts to expand students' views of AI and cybersecurity. Expanding student perceptions …
Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and type of initiation used by a robot. We investigate the effect that these forms of communication have on trust with and without robot mistakes during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when that robot offered advice only upon request as opposed to when the robot took initiative to give advice.