Title: What Did They Learn? Objective Assessment Tools Show Mixed Effects of Training on Science Communication Behaviors
There is widespread agreement about the need to assess the success of programs training scientists to communicate more effectively with non-professional audiences. However, there is little agreement about how that should be done. What do we mean when we talk about “effective communication”? What should we measure? How should we measure it? Evaluation of communication training programs often incorporates the views of students or trainers themselves, although this is widely understood to bias the assessment. We recently completed a 3-year experiment using audiences of non-scientists to evaluate the effect of training on STEM (Science, Technology, Engineering and Math) graduate students’ communication ability. Overall, audiences rated STEM grad students’ communication performance no better after training than before, as we reported in Rubega et al. 2018. However, audience ratings do not reveal whether training changed specific trainee communication behaviors (e.g., jargon use, narrative techniques), even if those changes were too small to affect trainees’ overall success. Here we measure trainee communication behavior directly, using multiple textual analysis tools and analysis of trainees’ body language during videotaped talks. We found that student use of jargon declined after training but that use of narrative techniques did not increase. Flesch Reading Ease and Flesch-Kincaid Grade Level scores, used as indicators of the complexity of sentences and word choice, were no different after instruction. Trainees’ hand movements and hesitancy during talks were correlated negatively with audience ratings of credibility and clarity; smiling, on the other hand, was correlated with improvement in credibility, clarity and engagement scores given by audience members. We show that objective tools can be used to measure the success of communication training programs, that non-verbal cues are associated with audience judgments, and that an intensive communication course does change some, if not all, communication behaviors.
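For reference, the two readability metrics named above are simple functions of average sentence length and average syllables per word: Flesch Reading Ease = 206.835 - 1.015*(words per sentence) - 84.6*(syllables per word), and Flesch-Kincaid Grade Level = 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59. The sketch below is a minimal, illustrative Python implementation, not the analysis pipeline used in the study; in particular, the syllable counter is a crude vowel-group heuristic, whereas published analyses typically rely on a pronunciation dictionary or an established readability package.

    import re

    def count_syllables(word):
        # Crude heuristic: count groups of consecutive vowels.
        # A production analysis would use a pronunciation dictionary instead.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        # Returns (Flesch Reading Ease, Flesch-Kincaid Grade Level).
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps = len(words) / len(sentences)   # words per sentence
        spw = syllables / len(words)        # syllables per word
        ease = 206.835 - 1.015 * wps - 84.6 * spw
        grade = 0.39 * wps + 11.8 * spw - 15.59
        return ease, grade

    print(readability("The mitochondrion is the powerhouse of the cell."))

Higher Reading Ease and lower Grade Level indicate simpler prose; the study treated these scores as proxies for the complexity of trainees' sentences and word choice.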
Award ID(s):
2022036
NSF-PAR ID:
10342400
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Communication
Volume:
6
ISSN:
2297-900X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As the science community has recognized the vital role of communicating to the public, science communication training has proliferated. The development of rigorous, comparable approaches to assessment of training has not kept pace. We conducted a fully controlled experiment using a semester-long science communication course, and audience assessment of communicator performance. Evaluators scored the communication competence of trainees and their matched, untrained controls, before and after training. Bayesian analysis of the data showed very small gains in communication skills of trainees, and no difference from untrained controls. High variance in scores suggests little agreement on what constitutes “good” communication.
  2. Graduate students emerging from STEM programs face inequitable professional landscapes in which their ability to practice inclusive and effective science communication with interdisciplinary and public audiences is essential to their success. Yet these students are rarely offered the opportunity to learn and practice inclusive science communication in their graduate programs. Moreover, minoritized students rarely have the opportunity to validate their experiences among peers and develop professional sensibilities through research training. In this article, the authors offer the Science Communication (Sci/Comm) Scholar’s working group at The University of Texas at San Antonio as one model for training graduate students in human dimensions and inclusive science communication for effective public engagement in thesis projects and beyond. The faculty-facilitated peer-to-peer working group encouraged participation by women, who often face inequities in STEM workplaces. Early results indicate that team-based training in both the science and art of public engagement provides critical exposure to help students understand the methodological care needed for human dimensions research, and to facilitate narrative-based citizen science engagements. The authors demonstrate this through several brief profiles of environmental science graduate students’ thesis projects. Each case emphasizes the importance of research design for public engagement via quantitative surveys and narrative-based science communication interventions. Through a faculty-facilitated peer-to-peer working group framework, research design and methodological care function as an integration point for social scientific and rhetorical training for inclusive science communication with diverse audiences.
  3. Concerns about the spread of misinformation online via news articles have led to the development of many tools and processes involving human annotation of their credibility. However, much is still unknown about how different people judge news credibility, or about the quality and reliability of credibility ratings from populations of varying expertise. In this work, we consider credibility ratings from two “crowd” populations: 1) students within journalism or media programs, and 2) crowd workers on UpWork, and compare them with the ratings of two sets of experts, journalists and climate scientists, on a set of 50 climate-science articles. We find that both groups’ credibility ratings correlate more highly with those of the journalism experts than with those of the science experts, with 10-15 raters needed to achieve convergence. We also find that raters’ gender and political leaning impact their ratings. Across article genres (news/opinion/analysis) and source leanings (left/center/right), crowd ratings were most similar to expert ratings for opinion articles and for strongly left-leaning sources.
  4. When scientists disseminate their work to the general public, excessive use of jargon should be avoided, because if too much technical language is used the message is not effectively conveyed. However, determining which words are jargon and how much jargon is too much is a difficult task, partly because it can be challenging to know which terms the general public knows, and partly because it can be challenging to ensure scientific accuracy while avoiding esoteric terminology. To help address this issue, we have written an R script that an author can use to quantify the amount of scientific jargon in any written piece and make appropriate edits based on the target audience.
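The jargon-quantification idea described in the last record can be illustrated with a short sketch: flag the fraction of words in a passage that do not appear in a list of words familiar to general audiences. The Python below is a minimal illustration of that general approach, not the authors' R script; the tiny common-word set is purely hypothetical, and a real tool would load a frequency list built from a large general-audience corpus.

    import re

    def jargon_fraction(text, common_words):
        # Share of words in `text` that fall outside a common-word list;
        # words not on the list are flagged as potential jargon.
        words = [w.lower() for w in re.findall(r"[A-Za-z'-]+", text)]
        if not words:
            return 0.0
        flagged = [w for w in words if w not in common_words]
        return len(flagged) / len(words)

    # Hypothetical, truncated common-word list for illustration only.
    common = {"the", "of", "a", "and", "is", "this", "cell", "protein"}
    sample = "Phosphorylation of the kinase regulates the cell cycle."
    print(round(jargon_fraction(sample, common), 2))

A per-word fraction like this can be compared across drafts or across pre- and post-training talks; the main study above similarly tracked jargon use as one of its objective behavioral measures.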