-
There is interest in using social media content to supplement or even substitute for survey data. In one of the first studies to test the feasibility of this idea, O’Connor, Balasubramanyan, Routledge, and Smith report reasonably high correlations between the sentiment of tweets containing the word “jobs” and survey-based measures of consumer confidence in 2008–2009. Other researchers report a similar relationship through 2011, but after that time it is no longer observed, suggesting such tweets may not be as promising an alternative to survey responses as originally hoped. But it is possible that, with the right analytic techniques, the sentiment of “jobs” tweets might still be an acceptable alternative. To explore this, we first classify “jobs” tweets into categories whose content is either related to employment or not, to see whether the sentiment of the former correlates more highly with a survey-based measure of consumer sentiment. We then compare the relationship when sentiment is determined with traditional dictionary-based methods versus newer machine learning-based tools developed for Twitter-like texts. We calculated daily sentiment in three different ways and used a measure of association less sensitive to outliers than correlation. None of these approaches improved the size of the relationship in the original or more recent data. We found that the many micro-decisions these analyses require, such as the size of the smoothing interval and the length of the lag between the two series, can significantly affect the outcomes. In the end, despite the earlier promise of tweets as an alternative to survey responses, we find no evidence that the original relationship in these data was more than a chance occurrence.
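The kind of pipeline this abstract describes — smooth a noisy daily tweet-sentiment series, lag it against a survey-based series, and measure association with a statistic less sensitive to outliers than Pearson correlation — can be sketched in a few lines. The sketch below is illustrative only: the synthetic `tweet_sentiment` and `survey_sentiment` series, the Spearman rank correlation standing in for whichever robust measure the authors used, and the specific window and lag values swept at the end are all assumptions, not details taken from the paper.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def lagged_association(tweet_sentiment: pd.Series,
                       survey_sentiment: pd.Series,
                       window: int = 15,
                       lag: int = 0) -> float:
    """Spearman rank correlation between smoothed, lagged daily tweet
    sentiment and a survey-based consumer-sentiment series. Both inputs
    are assumed to be float Series indexed by date."""
    # Smooth the noisy daily tweet series with a centered moving average.
    smoothed = tweet_sentiment.rolling(window, center=True).mean()
    # Shift so tweet sentiment on day t is compared with the survey
    # series `lag` days later.
    shifted = smoothed.shift(lag)
    # Align the two series on shared dates and drop missing values
    # introduced by the rolling window and the shift.
    aligned = pd.concat([shifted, survey_sentiment], axis=1).dropna()
    rho, _ = spearmanr(aligned.iloc[:, 0], aligned.iloc[:, 1])
    return rho

# Synthetic stand-ins so the sketch runs end to end.
dates = pd.date_range("2008-01-01", periods=365, freq="D")
rng = np.random.default_rng(0)
tweet_sentiment = pd.Series(rng.normal(size=365), index=dates)
survey_sentiment = pd.Series(rng.normal(size=365), index=dates)

# The abstract's point about "micro-decisions": sweeping plausible
# smoothing windows and lags (values chosen arbitrarily here) shows
# how much these choices alone can move the estimated association.
for window in (7, 15, 30):
    for lag in (0, 7, 14, 30):
        print(window, lag,
              lagged_association(tweet_sentiment, survey_sentiment,
                                 window, lag))
```

Sweeping the grid rather than reporting a single configuration makes the sensitivity visible: if the association swings substantially across reasonable window and lag choices, any one reported correlation is hard to interpret, which is consistent with the abstract's conclusion.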
-
This paper summarizes a set of design considerations that survey researchers exploring the potential for live video to substitute for in-person interviewing will need to address. While the solutions appropriate for a particular study are likely to vary, researchers will need to consider (at least) which sample members have access to video and will be willing and able to participate, which video platform(s) to use, whether interviews need to be scheduled in advance or are conducted on demand, how interviewers’ screens should be configured, the interviewer’s visual background and auditory environment, and how interviewers should be trained to administer video interviews, avoid bias, and be prepared to handle technological problems as they arise.
-
This paper examines when conceptual misalignments in dialog lead to consequential miscommunication. Two studies explore misunderstanding in survey interviews of the sort conducted by governments and social scientists, where mismeasurement can have real social costs. In 131 interviews about tobacco use, misalignment between respondents’ and researchers’ conceptions of ordinary expressions like “smoking” and “every day” was quantified by probing respondents’ interpretations of survey terms and re-administering the survey questionnaire with standard definitions after the interview. Respondents’ interpretations were surprisingly variable, and in many cases they did not match the conceptions that researchers intended them to use. More often than one might expect, this conceptual variability was consequential, leading to answers (and, in principle, to estimates of the prevalence of smoking and related attributes in the population) that would have been different had conceptualizations been aligned; for example, fully 12% of respondents gave a different answer about having smoked 100 cigarettes in their entire life when later given a standard definition. In other cases misaligned interpretations did not lead to miscommunication, in that the differences would not have led to different survey responses. Although clarification of survey terms during the interview sometimes improved conceptual alignment, this was not guaranteed; in this corpus some needed attempts at clarification were never made, some attempts did not succeed, and some seemed to make understanding worse. The findings suggest that conceptual misalignments may be more frequent in ordinary conversation than interlocutors know, and that attempts to detect and clarify them may not always work. They also suggest that at least some unresolved misunderstandings do not matter in the sense that they do not change the outcome of the communication—in this case, the survey estimates.