Search for: All records

Award ID contains: 1660894

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1. Little quantitative research has explored which clinician skills and behaviors facilitate communication. Mutual understanding is especially challenging when patients have limited health literacy (HL). Two strategies hypothesized to improve communication are matching the complexity of language to patients’ HL (“universal tailoring”) and always using simple language (“universal precautions”). Through computational linguistic analysis of 237,126 email exchanges between dyads of 1094 physicians and 4331 English-speaking patients, we assessed matching (concordance/discordance) between physicians’ linguistic complexity and patients’ HL, and classified physicians’ communication strategies. Among low-HL patients, discordance was associated with poor understanding (P = 0.046). Physicians’ “universal tailoring” strategy was associated with better understanding for all patients (P = 0.01), while “universal precautions” was not. There was an interaction between concordance and communication strategy (P = 0.021): the combination of dyadic concordance and “universal tailoring” eliminated HL-related disparities. Physicians’ ability to adapt communication to match their patients’ HL promotes shared understanding and equity. The ‘Precision Medicine’ construct should be expanded to include the domain of ‘Precision Communication.’
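The abstract above describes classifying dyads as concordant or discordant and labeling physicians by communication strategy. A minimal sketch of that classification idea, assuming toy data, an illustrative complexity scale of 0–1, and a hypothetical 0.5 simplicity threshold (none of these details come from the study itself):

```python
# Hedged sketch (NOT the study's actual pipeline): classify
# physician-patient dyads as concordant or discordant, and label a
# physician's strategy as "universal tailoring" (complexity adapted to
# patient HL) vs "universal precautions" (uniformly simple language).
# The 0.5 threshold and the field layout are illustrative assumptions.

from statistics import mean

SIMPLE = 0.5  # assumed cutoff on a 0-1 linguistic-complexity scale

def dyad_concordant(physician_complexity, patient_hl):
    """A dyad is concordant when message complexity matches the
    patient's health literacy: low-HL patients need simple language;
    high-HL patients can handle any level."""
    if patient_hl == "low":
        return physician_complexity <= SIMPLE
    return True

def physician_strategy(dyads):
    """dyads: list of (mean_complexity, patient_hl) pairs, one per
    patient of a single physician."""
    complexities = [c for c, _ in dyads]
    if max(complexities) <= SIMPLE:
        return "universal precautions"   # simple language with everyone
    low = [c for c, hl in dyads if hl == "low"]
    high = [c for c, hl in dyads if hl == "high"]
    if low and high and mean(low) < mean(high):
        return "universal tailoring"     # complexity tracks patient HL
    return "no adaptation"

dyads = [(0.3, "low"), (0.7, "high"), (0.4, "low"), (0.8, "high")]
print(physician_strategy(dyads))  # -> universal tailoring
```

Under this toy scheme, a physician whose complexity never exceeds the threshold is labeled "universal precautions," while one whose complexity is lower for low-HL patients than for high-HL patients is labeled "universal tailoring."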
  2. Positive interpersonal relationships require shared understanding along with a sense of rapport. A key facet of rapport is the mirroring and convergence of facial expression and body language, known as nonverbal synchrony. We examined nonverbal synchrony in a study of 29 heterosexual romantic couples, in which audio, video, and bracelet accelerometer data were recorded during three conversations. We extracted facial-expression, body-movement, and acoustic-prosodic features to train neural network models that predicted the nonverbal behaviors of one partner from those of the other. Recurrent models (LSTMs) outperformed feed-forward neural networks and chance baselines. The models learned behaviors encompassing facial responses, speech-related facial movements, and head movement, but did not capture fleeting or periodic behaviors such as nodding, head turning, and hand gestures. Notably, a preliminary analysis of clinical measures showed greater association with our model outputs than with correlations of the raw signals. We discuss potential uses of these generative models as a research tool to complement current analytical methods, along with real-world applications (e.g., as a tool in therapy).
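The recurrent modeling idea above — predicting one partner's nonverbal features from the other's at each time step — can be sketched with a single NumPy LSTM cell. This is an illustrative toy, not the paper's code: the weights are random and untrained, and the feature dimensions (5 inputs, 3 outputs) are arbitrary assumptions.

```python
# Hedged sketch of the modeling idea: a single LSTM cell in NumPy that
# maps one partner's feature vector at each time step to a prediction
# of the other partner's features. Weights are random here; in the
# study, models were trained on facial, body-movement, and
# acoustic-prosodic features.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    def __init__(self, n_in, n_hidden, n_out):
        # Stacked gate weights: input, forget, cell, and output gates.
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.n_hidden = n_hidden

    def forward(self, xs):
        h = np.zeros(self.n_hidden)   # hidden state
        c = np.zeros(self.n_hidden)   # cell state
        preds = []
        for x in xs:                  # one partner's features per step
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            preds.append(self.W_out @ h)  # predicted partner features
        return np.array(preds)

# 10 time steps of 5 input features -> 3 predicted partner features.
cell = LSTMCell(n_in=5, n_hidden=8, n_out=3)
out = cell.forward(rng.normal(size=(10, 5)))
print(out.shape)  # -> (10, 3)
```

Because the hidden and cell states carry context forward across time steps, a recurrent model like this can track slowly evolving behaviors (e.g., sustained facial responses), which is consistent with the finding that LSTMs captured those but missed brief, periodic gestures.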