Title: Aspect-Sentiment-Guided Opinion Summarization for User Need Elicitation From Online Reviews
Extracting and analyzing informative user opinions from large-scale online reviews is a key success factor in product design processes. However, user reviews are naturally unstructured, noisy, and verbose. Recent advances in abstractive text summarization provide an unprecedented opportunity to systematically generate summaries of user opinions to facilitate need finding for designers. Yet, two main gaps in state-of-the-art opinion summarization methods limit their applicability to the product design domain. The first is the lack of capabilities to guide the generative process with respect to various product aspects and user sentiments (e.g., polarity, subjectivity); the second is the lack of annotated training datasets for supervised learning. This paper tackles these gaps by (1) devising an efficient and scalable methodology for abstractive opinion summarization from online reviews, guided by aspect terms and sentiment polarities, and (2) automatically generating a reusable synthetic training dataset that captures various degrees of granularity and polarity. The methodology contributes a multi-instance pooling model with integrated aspect and sentiment information (MAS), a synthetic dataset assembled from the results of the MAS model, and a fine-tuned pretrained sequence-to-sequence model “T5” for summary generation. Numerical experiments are conducted on a large dataset scraped from a major e-commerce retail store for sneakers to demonstrate the performance, feasibility, and potential of the developed methodology. Several directions are provided for future exploration in the area of automated opinion summarization for user-centered product design.
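As a concrete illustration of the summary-generation step, the minimal sketch below conditions a pretrained T5 model on an aspect term and a sentiment polarity via a textual control prefix, using the Hugging Face transformers library. The prefix format, checkpoint name ("t5-base"), and the summarize helper are illustrative assumptions, not the paper's exact configuration, which relies on fine-tuning with the MAS-derived synthetic dataset.

```python
# Minimal sketch of aspect-/sentiment-guided summary generation with a
# pretrained T5 model (Hugging Face transformers). The control-prefix format
# and checkpoint name are illustrative assumptions, not the paper's exact setup.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

MODEL_NAME = "t5-base"  # the paper fine-tunes a pretrained T5; the size here is an assumption
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def summarize(reviews, aspect, sentiment, max_len=64):
    """Generate an opinion summary for one (aspect, sentiment) slice of reviews."""
    # Prepend aspect and sentiment as a textual control prefix so the decoder
    # is conditioned on them (hypothetical prompt format).
    source = f"aspect: {aspect} | sentiment: {sentiment} | reviews: " + " ".join(reviews)
    inputs = tokenizer(source, truncation=True, max_length=512, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=max_len, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(summarize(
    ["The sole feels stiff after a week.", "Great color but the cushioning is thin."],
    aspect="comfort", sentiment="negative"))
```

In practice the model would first be fine-tuned on the synthetic (reviews, summary) pairs before generation; the snippet only shows how aspect and sentiment guidance can be injected on the input side.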
Award ID(s):
2050052
PAR ID:
10387290
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Aspect-Sentiment-Guided Opinion Summarization for User Need Elicitation From Online Reviews
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Eliciting informative user opinions from online reviews is a key success factor for innovative product design and development. The unstructured, noisy, and verbose nature of user reviews, however, often complicates large-scale need finding in a format useful for designers without losing important information. Recent advances in abstractive text summarization have created the opportunity to systematically generate opinion summaries from online reviews to inform the early stages of product design and development. However, two knowledge gaps hinder the applicability of opinion summarization methods in practice. First, there is a lack of formal mechanisms to guide the generative process with respect to different categories of product attributes and user sentiments. Second, the annotated training datasets needed for supervised training of abstractive summarization models are often difficult and costly to create. This article addresses these gaps by (1) devising an efficient computational framework for abstractive opinion summarization guided by specific product attributes and sentiment polarities, and (2) automatically generating a synthetic training dataset that captures various degrees of granularity and polarity. A hierarchical multi-instance attribute-sentiment inference model is developed for assembling a high-quality synthetic dataset, which is utilized to fine-tune a pretrained language model for abstractive summary generation. Numerical experiments conducted on a large dataset scraped from three major e-Commerce retail stores for apparel and footwear products indicate the performance, feasibility, and potential of the developed framework. Several directions are provided for future exploration in the area of automated opinion summarization for user-centered design.
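The sketch below illustrates one way such a synthetic dataset could be assembled once sentence-level attribute and sentiment predictions are available: sentences are grouped by (attribute, sentiment), and the sentence closest to each group's embedding centroid serves as the pseudo-summary target. The centroid heuristic and function names are assumptions for illustration, not the article's exact hierarchical inference procedure.

```python
# Illustrative sketch of assembling synthetic (reviews -> pseudo-summary) training
# pairs per attribute/sentiment group, assuming sentence-level attribute and
# sentiment predictions and sentence embeddings are already available. The
# centroid-based target selection is a common self-supervised heuristic, not
# necessarily the authors' exact procedure.
from collections import defaultdict
import numpy as np

def build_synthetic_pairs(sentences, attributes, sentiments, embeddings):
    """sentences[i] is labeled (attributes[i], sentiments[i]) and embedded as embeddings[i]."""
    groups = defaultdict(list)
    for idx, (attr, pol) in enumerate(zip(attributes, sentiments)):
        groups[(attr, pol)].append(idx)

    pairs = []
    for (attr, pol), idxs in groups.items():
        if len(idxs) < 3:          # need at least a source set and a target
            continue
        vecs = np.stack([embeddings[i] for i in idxs])
        centroid = vecs.mean(axis=0)
        # The sentence closest to the group centroid serves as the pseudo-summary.
        target_pos = int(np.argmin(np.linalg.norm(vecs - centroid, axis=1)))
        target = sentences[idxs[target_pos]]
        source = [sentences[i] for j, i in enumerate(idxs) if j != target_pos]
        pairs.append({"attribute": attr, "sentiment": pol,
                      "source": source, "summary": target})
    return pairs
```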
  2. Aspect-based sentiment analysis (ABSA) provides an opportunity to systematically extract users' opinions on specific aspects to enrich idea creation in the early stages of the product/service design process. Yet, the current ABSA task has two major limitations. First, existing research mostly focuses on subsets of the ABSA task (e.g., aspect-sentiment extraction); extracting aspect, opinion, and sentiment in a unified model remains an open problem. Second, implicit opinions and sentiments are ignored in the current ABSA task. This article tackles these gaps by (1) creating a new annotated dataset comprising five types of labels, including aspect, category, opinion, sentiment, and implicit indicator (ACOSI), and (2) developing a unified model that extracts all five types of labels simultaneously in a generative manner. Numerical experiments conducted on the manually labeled dataset, originally scraped from three major e-Commerce retail stores for apparel and footwear products, indicate the performance, scalability, and potential of the developed framework. Several directions are provided for future exploration in the area of automated aspect-based sentiment analysis for user-centered design.
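A common way to realize such a unified generative model is to linearize each ACOSI quintuple into a delimited target string that a sequence-to-sequence model learns to produce, as sketched below. The separator tokens and example labels are illustrative assumptions, not the article's exact serialization.

```python
# Sketch of framing ACOSI extraction as sequence generation: the target side
# linearizes each quintuple as "aspect | category | opinion | sentiment |
# implicit-indicator". Separators and example labels are illustrative only.

def linearize(quintuples):
    """Turn a list of ACOSI tuples into one target string for a seq2seq model."""
    return " ; ".join(" | ".join(fields) for fields in quintuples)

def parse(target_text):
    """Recover ACOSI tuples from a generated target string."""
    return [tuple(part.strip() for part in chunk.split("|"))
            for chunk in target_text.split(";") if chunk.strip()]

review = "The insole wears out fast, though they look great."
gold = [("insole", "durability", "wears out fast", "negative", "explicit"),
        ("NULL", "appearance", "look great", "positive", "implicit")]

target = linearize(gold)
print(target)
print(parse(target))
```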
  3. Aspect-based sentiment analysis (ABSA) enables a systematic identification of user opinions on particular aspects, thus enhancing the idea creation process in the initial stages of product/service design. Attention-based large language models (LLMs) like BERT and T5 have proven powerful in ABSA tasks. Yet, several key limitations remain, both regarding the ABSA task and the capabilities of attention-based models. First, existing research mainly focuses on relatively simple ABSA tasks such as aspect-based sentiment analysis, while the task of extracting aspect, opinion, and sentiment in a unified model remains largely unaddressed. Second, current ABSA tasks overlook implicit opinions and sentiments. Third, most attention-based LLMs like BERT use position encoding in a linearly projected manner or through split-position relations in word distance schemes, which could lead to relation biases during the training process. This article addresses these gaps by (1) creating a new annotated dataset with five types of labels, including aspect, category, opinion, sentiment, and implicit indicator (ACOSI), (2) developing a unified model capable of extracting all five types of labels simultaneously in a generative manner, and (3) designing a new position encoding method in the attention-based model. The numerical experiments conducted on a manually labeled dataset scraped from three major e-Commerce retail stores for apparel and footwear products demonstrate the performance, scalability, and potential of the framework developed. The article concludes with recommendations for future research on automated need finding and sentiment analysis for user-centered design.
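For intuition, the PyTorch sketch below shows one generic alternative to linearly projected absolute position encodings: a learned bias indexed by clipped relative distance, added directly to the attention scores. It is not the specific position encoding proposed in the article.

```python
# Minimal PyTorch sketch of a learned relative-position bias added to scaled
# dot-product attention. Illustrates the general idea only; not the article's
# proposed encoding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeBiasAttention(nn.Module):
    def __init__(self, dim, max_distance=128):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.max_distance = max_distance
        # One learnable bias per clipped relative distance in [-max, +max].
        self.rel_bias = nn.Embedding(2 * max_distance + 1, 1)

    def forward(self, x):                                 # x: (batch, seq_len, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5       # (b, n, n)
        pos = torch.arange(n, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_distance, self.max_distance)
        scores = scores + self.rel_bias(rel + self.max_distance).squeeze(-1)
        return F.softmax(scores, dim=-1) @ v
```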
  4. Proc. 2023 ACM Int. Conf. on Web Search and Data Mining (Ed.)
    Target-oriented opinion summarization aims to profile a target by extracting user opinions from multiple related documents. Instead of simply mining opinion ratings on a target (e.g., a restaurant) or on multiple aspects (e.g., food, service) of a target, it is desirable to go deeper, to mine opinions on fine-grained sub-aspects (e.g., fish). However, it is expensive to obtain high-quality annotations at such fine-grained scale. This leads to our proposal of a new framework, FineSum, which advances the frontier of opinion analysis in three aspects: (1) minimal supervision, where no document-summary pairs are provided, only aspect names and a few aspect/sentiment keywords are available; (2) fine-grained opinion analysis, where sentiment analysis drills down to a specific subject or characteristic within each general aspect; and (3) phrase-based summarization, where short phrases are taken as basic units for summarization, and semantically coherent phrases are gathered to improve the consistency and comprehensiveness of the summary. Given a large corpus with no annotation, FineSum first automatically identifies potential spans of opinion phrases, and further reduces the noise in identification results using aspect and sentiment classifiers. It then constructs multiple fine-grained opinion clusters under each aspect and sentiment. Each cluster expresses uniform opinions towards certain sub-aspects (e.g., “fish” in “food” aspect) or characteristics (e.g., “Mexican” in “food” aspect). To accomplish this, we train a spherical word embedding space to explicitly represent different aspects and sentiments. We then distill the knowledge from embedding to a contextualized phrase classifier, and perform clustering using the contextualized opinion-aware phrase embedding. Both automatic evaluations on the benchmark and quantitative human evaluation validate the effectiveness of our approach.
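The clustering step can be approximated as follows: contextualized phrase embeddings are L2-normalized onto the unit sphere so that Euclidean k-means behaves like spherical (cosine) clustering. The embedding source and cluster count are placeholders; this is not the full FineSum pipeline.

```python
# Rough sketch of clustering opinion phrases: contextualized phrase embeddings
# are L2-normalized onto the unit sphere and grouped with k-means, so Euclidean
# distance approximates cosine similarity. Placeholders, not FineSum's pipeline.
import numpy as np
from sklearn.cluster import KMeans

def cluster_opinion_phrases(phrase_embeddings, n_clusters=10, seed=0):
    """phrase_embeddings: (num_phrases, dim) array of contextualized vectors."""
    X = np.asarray(phrase_embeddings, dtype=np.float32)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)      # project onto the unit sphere
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    return km.labels_   # phrases sharing a label express a uniform sub-aspect opinion
```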
  5. We investigate pre-training techniques for abstractive multi-document summarization (MDS), which is much less studied than summarizing single documents. Though recent work has demonstrated the effectiveness of highlighting information salience for pre-training strategy design, such approaches struggle to generate abstractive and reflective summaries, which are critical properties for MDS. To this end, we present PELMS, a pre-trained model that uses pre-training objectives based on semantic coherence heuristics and faithfulness constraints together with unlabeled multi-document inputs, to promote the generation of concise, fluent, and faithful summaries. To support the training of PELMS, we compile MultiPT, a multi-document pre-training corpus containing over 93 million documents to form more than 3 million unlabeled topic-centric document clusters, covering diverse genres such as product reviews, news, and general knowledge. We perform extensive evaluation of PELMS in low-shot settings on a wide range of MDS datasets. Our approach consistently outperforms competitive comparisons with respect to overall informativeness, abstractiveness, coherence, and faithfulness, and with minimal fine-tuning can match the performance of language models at a much larger scale (e.g., GPT-4).
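A simplified version of such salience-oriented self-supervision is sketched below: within an unlabeled topic-centric cluster, the sentences most similar to the cluster centroid become the pseudo-summary target and the rest form the model input. PELMS's actual coherence and faithfulness heuristics are richer; this only conveys the general recipe.

```python
# Simplified sketch of building a self-supervised target for multi-document
# pre-training: the sentences most similar to the cluster centroid become the
# pseudo-summary and the remaining sentences form the input. Illustrative only.
import numpy as np

def make_pretraining_example(doc_sentences, sentence_embeddings, target_size=3):
    """doc_sentences: sentences from one document cluster;
    sentence_embeddings: matching (num_sentences, dim) array."""
    X = np.asarray(sentence_embeddings, dtype=np.float32)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centroid = X.mean(axis=0)
    salience = X @ centroid                        # cosine similarity to the cluster centroid
    target_idx = set(np.argsort(-salience)[:target_size])
    target = [doc_sentences[i] for i in sorted(target_idx)]
    source = [s for i, s in enumerate(doc_sentences) if i not in target_idx]
    return {"source": " ".join(source), "target": " ".join(target)}
```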