
Search for: All records

Creators/Authors contains: "Cooke, Nancy J."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Ụbụrụ is a computerized executive-function rehabilitation application designed for individuals with mild traumatic brain injury (mTBI). Ụbụrụ uses serious games to train cognitive flexibility, planning, and organization. This paper explores the rationale and components behind the alpha stage of the application’s development and its first design iteration. Individuals with a history of mTBI currently have limited rehabilitation options because of limited knowledge of available services, access barriers, time constraints, or financial and insurance constraints. Because mTBIs are invisible injuries, perceived injury severity is diminished, individuals are not properly equipped to proceed with rehabilitation, and awareness of the injury can be inadvertently compromised. The Ụbụrụ application is intended as a computerized cognitive rehabilitation alternative and supplement when limitations such as time, finances, or insurance exist.
    Free, publicly-accessible full text available June 26, 2023
  2. Free, publicly-accessible full text available July 1, 2023
  3. This research examines the relationship between anticipatory pushing of information and trust in human–autonomy teaming in a remotely piloted aircraft system synthetic task environment. Two participants and one AI teammate emulated by a confederate executed a series of missions under routine and degraded conditions. We addressed the following questions: (1) How do anticipatory pushing of information and trust change between human–human and human–autonomy team members across the two sessions? and (2) How is anticipatory pushing of information associated with the trust placed in a teammate across the two sessions? This study demonstrated two main findings: (1) anticipatory pushing of information and trust differed between human–human and human–AI dyads, and (2) anticipatory pushing of information and trust scores increased among human–human dyads under degraded conditions but decreased in human–AI dyads.
  4. Trust in autonomous teammates has been shown to be a key factor in human–autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in the HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles were asked to team with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error-handling performance and trust in the synthetic pilot. Results show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism with respect to fluctuations in trust resulting from autonomy failures.
  5. Objective: This work examines two human–autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. Background: Human–autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. Method: Thirty teams of three, each including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. Results: Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time; however, they did not perform better than the control condition. Conclusions: Training based on entrainment of communications, wherein timely information exchange introduced through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communication and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. Applications: Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
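The "communication anticipation ratio" referenced in entries 3 and 5 is typically computed as the number of anticipatory "pushed" information transfers (volunteered before being asked) divided by the number of "pulled" transfers (provided only after a request). The sketch below is an illustrative assumption, not the authors' instrument: the `Message` record and the push/pull coding scheme are hypothetical, and real studies code transcripts by hand or with trained raters.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    kind: str  # "push" = volunteered information; "pull" = requested information


def anticipation_ratio(messages):
    """Ratio of pushed to pulled information transfers.

    Values above 1.0 indicate predominantly anticipatory communication,
    often read as a sign of good team coordination.
    """
    pushes = sum(1 for m in messages if m.kind == "push")
    pulls = sum(1 for m in messages if m.kind == "pull")
    if pulls == 0:
        # No requests at all: report raw push count to avoid dividing by zero.
        return float(pushes)
    return pushes / pulls


# Toy communication log for one mission segment (hypothetical roles).
log = [
    Message("pilot", "push"),
    Message("navigator", "pull"),
    Message("pilot", "push"),
    Message("photographer", "push"),
]
print(anticipation_ratio(log))  # 3 pushes / 1 pull = 3.0
```

A higher ratio after coordination training would be consistent with the result reported in entry 5, where trained teams pushed more information ahead of requests.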