Objective
Some studies have demonstrated evidence of a 'bilingual advantage' in domains such as working memory (WM), processing speed (PS), and attention. Less is known about whether similar patterns appear in students who have not yet mastered a second language, and the available evidence is conflicting. For example, in Hansen et al. (2016), young students classified as limited English proficient (LEP) outperformed monolingual peers in WM; the opposite was found in Castillo et al. (2022). In both studies, the group differences did not persist into adolescence, and other studies at this age show no difference (Low & Siegel, 2005). Research on LEP students and PS is sparse, but most existing studies show no difference between monolinguals and bilinguals or LEP students (Barac et al., 2014). Visual attention (VA) has rarely been studied in this context, although one study of adults found no difference in visual attention between monolinguals and bilinguals (Bouffier et al., 2020). Here, we compare groups of students classified as LEP or not. Given prior research and the age and limited second-language proficiency of our subjects, we hypothesized that there would be no group difference in WM or PS; the limited research on VA did not allow a directional hypothesis.

Methods
Participants were 199 students from four diverse middle schools in Texas; their mean age was 12.97 years (SD = 0.86), and 54% were male. Most (80%) students were Hispanic, 54% were classified by their schools as LEP, and 88% received lunch assistance. WM and PS were assessed via the respective indices of the WISC-V (Wechsler, 2014). Attention was evaluated with two versions of a visual attention span (VAS) measure and two versions of a visual search (VSEARCH) measure. Covariates were age, nonverbal reasoning, and phonological processing. Analyses were ANCOVAs with LEP status as the grouping variable.
Results
Descriptively, on measures for which standard scores were available, performance for the whole sample was in the low average range (SS equivalents 85 to 88). Students designated as LEP had lower WM performance, p < .001. For PS, the reverse was true, with LEP students showing stronger PS, p < .001. For attention, results were mixed: performance on VAS was similar between groups, p = .174, whereas on VSEARCH, LEP students performed better, p < .001. All results held with any combination of covariates.

Discussion
Results differed from expectations. For WM, there was a disadvantage for students classified as LEP, whereas the opposite was true for PS and VSEARCH. The results highlight the need to consider these and similar cognitive individual differences in the context of second-language learning, as well as the balance of proficiency across languages. Supporting this possibility, a meta-analysis found that studies of balanced bilinguals, compared with unbalanced bilinguals, are more likely to report an advantage in WM and attention (Yurtsever et al., 2023). Overall, this study adds to a limited body of evidence on cognitive processes in students with exposure to, but not mastery of, multiple languages.
"All in the Same Boat": Tradeoffs of Voice Assistant Ownership for Mixed-Visual-Ability Families
A growing body of evidence suggests Voice Assistants (VAs) are highly valued by people with vision impairments (PWVI) and much less so by sighted users. Yet, many are deployed in homes where both PWVI and sighted family members reside. Researchers have yet to study whether VA use and perceived benefits are affected in settings where one person has a visual impairment and others do not. We conducted six in-depth interviews with partners to understand patterns of domestic VA use in mixed-visual-ability families. Although PWVI were more motivated to acquire VAs, used them more frequently, and learned more proactively about their features, partners with vision identified similar benefits and disadvantages of having VAs in their home. We found that the universal usability of VAs both equalizes experience across abilities and presents complex tradeoffs for families (regarding interpersonal relationships, domestic labor, and physical safety), which are weighed against accessibility benefits for PWVI and complicate the decision to fully integrate VAs in the home.
- Award ID(s):
- 1850251
- PAR ID:
- 10156140
- Date Published:
- Journal Name:
- CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
- Page Range / eLocation ID:
- 1–14
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Background
Monitoring technologies are used to collect a range of information, such as one's location out of the home or movement within the home, and transmit that information to caregivers to support aging in place. Their surveilling nature, however, poses ethical dilemmas and can be experienced as intrusive by people living with Alzheimer disease (AD) and AD-related dementias. These challenges are compounded when older adults are not engaged in decision-making about how they are monitored. Dissemination of these technologies is outpacing our understanding of how to communicate their functions, risks, and benefits to families and older adults. To date, there are no tools to help families understand the functions of monitoring technologies or guide them in balancing their perceived need for ongoing surveillance against the older adult's dignity and wishes.

Objective
We designed, developed, and piloted a communication and education tool, a web application called Let's Talk Tech, to support family decision-making about diverse technologies used in dementia home care. The knowledge base on designing online interventions for people living with mild dementia is still in development, and dyadic interventions in dementia care remain rare. We describe the intervention's motivation and development process, and the feasibility of using this self-administered web application in a pilot sample of people living with mild AD and their family care partners.

Methods
We surveyed 29 mild AD dementia care dyads living together before and after they completed the web application intervention and interviewed each dyad about their experiences with it. We report postintervention measures of feasibility (recruitment, enrollment, and retention) and acceptability (satisfaction, quality, and usability). Descriptive statistics were calculated for survey items, and thematic analysis of interview transcripts was used to illuminate participants' experiences and recommendations for improving the intervention.

Results
The study enrolled 33 people living with AD and their care partners, and 29 (88%) dyads completed the study (all but one were spousal dyads). Participants were asked to complete 4 technology modules, and all completed them. The majority of participants rated the tool as having the right length (>90%), having the right amount of information (>84%), being very clearly worded (>74%), and presenting information in a balanced way (>90%). Most felt the tool was easy to use and helpful and would likely recommend it to others.

Conclusions
This study demonstrated that our intervention to educate and to facilitate conversation and documentation of preferences is preliminarily feasible and acceptable to mild AD care dyads. Effectively involving older adults in these decisions and informing care partners of their preferences could enable families to avoid conflicts or risks associated with uninformed or disempowered use and to personalize use so both members of the dyad can experience benefits.
-
Despite significant vision loss, humans can still recognize various emotional stimuli through hearing and express diverse emotional responses, which can be sorted into two dimensions: arousal and valence. Yet much of the research has focused on sighted people, leaving a lack of knowledge about the emotion perception mechanisms of people with visual impairment. This study aims to advance knowledge of the degree to which people with visual impairment perceive various emotions: high/low arousal and positive/negative emotions. A total of 30 individuals with visual impairment participated in interviews in which they listened to stories of people who became visually impaired and encountered and overcame various challenges, and they were instructed to share their emotions. Participants perceived different kinds and intensities of emotions depending on demographic variables such as living alone, loneliness, onset of visual impairment, visual acuity, race/ethnicity, and employment status. This advanced knowledge of emotion perception in people with visual impairment is anticipated to contribute toward better designing social supports that can adequately accommodate those with visual impairment.
-
BACKGROUND
Facial expressions are critical for conveying emotions and facilitating social interaction. Yet, little is known about how accurately sighted individuals recognize emotions facially expressed by people with visual impairments in online communication settings.

OBJECTIVE
This study aimed to investigate sighted individuals' ability to understand facial expressions of six basic emotions in people with visual impairments during Zoom calls. It also aimed to examine whether education on facial expressions specific to people with visual impairments would improve emotion recognition accuracy.

METHODS
Sighted participants viewed video clips of individuals with visual impairments displaying facial expressions and identified the emotions displayed. They then received an educational session on facial expressions specific to people with visual impairments, addressing unique characteristics and potential misinterpretations. After the education, participants viewed another set of video clips and again identified the emotions displayed.

RESULTS
Before education, participants frequently misidentified emotions. After education, their accuracy in recognizing emotions improved significantly.

CONCLUSIONS
This study provides evidence that education on the facial expressions of people with visual impairments can significantly enhance sighted individuals' ability to accurately recognize emotions in online settings. This improved accuracy has the potential to foster more inclusive and effective online interactions between people with and without visual disabilities.
-
Reorientation enables navigators to regain their bearings after becoming lost. Disoriented individuals primarily reorient themselves using the geometry of a layout, even when other informative cues, such as landmarks, are present. Yet the specific strategies that animals use to determine geometry are unclear. Moreover, because vision allows subjects to rapidly form precise representations of objects and background, it is unknown whether it has a deterministic role in the use of geometry. In this study, we tested sighted and congenitally blind mice (Ns = 8–11) in various settings in which global shape parameters were manipulated. Results indicated that the navigational affordances of the context (the traversable space) promote sampling of boundaries, which determines the effective use of geometric strategies in both sighted and blind mice. However, blind animals can also effectively reorient themselves using 3D edges by extensively patrolling the borders, even when the traversable space is not limited by these boundaries.

