
Search for: All records

Creators/Authors contains: "Jena, D."


  1. Free, publicly accessible full text available August 1, 2023
  2. Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense. Our analyses of sentence-level attention patterns in LMs reveal that neural degeneration may be associated with insufficient learning of task-specific characteristics by the attention mechanism. This finding motivates on-the-fly attention modulation, a simple but effective method that enables the injection of priors into attention computation during inference. Automatic and human evaluation results on three text generation benchmarks demonstrate that attention modulation helps LMs generate text with enhanced fluency, creativity, and commonsense reasoning, in addition to significantly reducing sentence-level repetition. (An illustrative sketch of injecting a prior into attention scores appears after this list.)
  3. Free, publicly accessible full text available July 1, 2023
  4. Family planning programs are believed to have substantial long-term benefits for women’s health and well-being, yet few studies have established either the extent or the direction of long-term effects. The Matlab, Bangladesh, maternal and child health/family planning (MCH/FP) program afforded a 12-y period of well-documented differential access to services. We evaluate its impacts on women’s lifetime fertility, adult health, and economic outcomes 35 y after program initiation. We followed 1,820 women who were of reproductive age during the differential access period (born 1938–1973) from 1978 to 2012 using prospectively collected data from the Matlab Health and Demographic Surveillance System and the 1996 and 2012 Matlab Health and Socioeconomic Surveys. We estimated intent-to-treat single-difference models comparing treatment and comparison area women. MCH/FP significantly increased contraceptive use, reduced completed fertility, lengthened birth intervals, and reduced age at last birth, but had no significant positive impacts on health or economic outcomes. Treatment area women had modestly poorer overall health (+0.07 SD) and respiratory health (+0.12 SD), and those born 1950–1961 had significantly higher body mass index (BMI) in 1996 (0.76 kg/m²) and 2012 (0.57 kg/m²); fewer were underweight in 1996, but more were overweight or obese in 2012. Overall, there was a +2.5 kg/m² secular increase in BMI. We found substantial changes in lifetime contraceptive and fertility behavior but no long-term health or economic benefits of the program. We observed modest negative health impacts that likely result from an accelerated nutritional transition among treated women, a transition that would, in an earlier context, have been beneficial. (A toy single-difference intent-to-treat calculation appears after this list.)
  5. Multimodal disinformation, from 'deepfakes' to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless -- such as a filtered vacation photo. The difference between this example and harmful edits that spread disinformation is one of intent. Recognizing and describing this intent is a major challenge for today's AI systems. We present the task of Edited Media Understanding, requiring models to answer open-ended questions that capture the intent and implications of an image edit. We introduce a dataset for our task, EMU, with 48k question-answer pairs written in rich natural language. We evaluate a wide variety of vision-and-language models for our task and introduce a new model, PELICAN, which builds upon recent progress in pretrained multimodal representations. Our model obtains promising results on our dataset, with humans rating its answers as accurate 40.35% of the time. At the same time, there is still much work to be done -- humans prefer human-annotated captions 93.56% of the time -- and we provide analysis that highlights areas for further progress.
  6. While many languages use adpositions to encode semantic relationships between content words in a sentence (e.g., agentivity or temporality), the details of how adpositions work vary widely across languages with respect to both form and meaning. In this paper, we empirically adapt the SNACS framework (Schneider et al., 2018) to Korean, a language that is typologically distant from English—the language SNACS was based on. We apply the SNACS framework to annotate the highly popular novella The Little Prince with semantic supersense labels over all Korean postpositions. Thus, we introduce the first broad-coverage corpus annotated with Korean postposition semantics and provide a detailed analysis of the corpus with an apples-to-apples comparison between Korean and English annotations.
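
The following is a minimal, purely illustrative sketch of the kind of attention modulation described in item 2: an additive prior is injected into standard scaled dot-product attention logits at inference time, with no retraining. The function names, the strength parameter alpha, and the repetition-discouraging prior shown here are hypothetical choices for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def modulated_attention(q, k, v, prior_bias, alpha=1.0):
    """Scaled dot-product attention with an additive prior injected
    into the attention logits at inference time.

    q, k, v: (L, d) arrays of queries, keys, and values.
    prior_bias: (L, L) log-space prior over which positions to attend to.
    alpha: modulation strength (alpha=0 recovers vanilla attention).
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)         # standard attention scores
    logits = logits + alpha * prior_bias  # inject the prior, no retraining
    weights = softmax(logits, axis=-1)
    return weights @ v, weights

# Toy usage: a hypothetical prior that down-weights attention to the
# immediately preceding position, e.g. to discourage local repetition.
L, d = 5, 8
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, L, d))
prior = np.zeros((L, L))
for i in range(1, L):
    prior[i, i - 1] = -2.0
out, weights = modulated_attention(q, k, v, prior, alpha=1.0)
print(np.round(weights, 3))
```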
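And here is a toy single-difference intent-to-treat calculation of the kind referenced in item 4, using made-up numbers rather than the study's data: outcomes are compared by assigned area (treatment vs. comparison), regardless of individual program take-up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outcome data in SD units; purely illustrative, not the study's data.
treatment_area = rng.normal(loc=0.07, scale=1.0, size=900)
comparison_area = rng.normal(loc=0.00, scale=1.0, size=920)

# Single-difference ITT estimate: compare groups by assigned area,
# regardless of whether individuals actually used program services.
itt = treatment_area.mean() - comparison_area.mean()
se = np.sqrt(treatment_area.var(ddof=1) / treatment_area.size
             + comparison_area.var(ddof=1) / comparison_area.size)
print(f"ITT estimate: {itt:+.3f} SD (SE {se:.3f})")
```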