- 
            We introduce a novel actor-critic framework that utilizes vision-language models (VLMs) and large language models (LLMs) for design concept generation, particularly for producing a diverse array of innovative solutions to a given design problem. By leveraging the extensive data repositories and pattern recognition capabilities of these models, our framework achieves this goal by enabling iterative interactions between two VLM agents: an actor (i.e., concept generator) and a critic. The actor, a custom VLM (e.g., GPT-4) created using few-shot learning and fine-tuning techniques, generates initial design concepts that are improved iteratively based on guided feedback from the critic, which is either a prompt-engineered LLM or a set of design-specific quantitative metrics. This process aims to optimize the generated concepts with respect to four metrics: novelty, feasibility, problem-solution relevancy, and variety. The framework incorporates both long-term and short-term memory models to examine how incorporating the history of interactions affects decision-making and concept generation outcomes. We explored the efficacy of incorporating images alongside text in conveying design ideas within our actor-critic framework by experimenting with two mediums for the agents: vision-language and language-only. We extensively evaluated the framework through a case study using the AskNature dataset, comparing its performance against benchmarks such as GPT-4 and real-world biomimetic designs across various industrial examples. Our findings underscore the framework's capability to iteratively refine and enhance the initial design concepts, achieving significant improvements across all metrics. We conclude by discussing the implications of the proposed framework for various design domains, along with its limitations and several directions for future research in this domain.
            Free, publicly-accessible full text available September 1, 2026.
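The iterative actor-critic loop this abstract describes can be sketched in a few lines. The function names, the feedback wording, and the scoring rule below are illustrative stand-ins for the VLM/LLM calls and the four design metrics, not the authors' implementation.

```python
# Minimal sketch of an actor-critic refinement loop, assuming stubbed
# stand-ins for the VLM actor, the LLM critic, and the metric scoring.

def generate_concept(problem, feedback=None):
    """Actor (stub): produce, or revise, a design concept as text."""
    base = f"concept for: {problem}"
    return base if feedback is None else f"{base} [revised: {feedback}]"

def critique(concept):
    """Critic (stub): return guided feedback on the current concept."""
    return "increase novelty"

def score(concept):
    """Score the concept on the four metrics (toy rule: revisions help)."""
    metrics = ("novelty", "feasibility", "relevancy", "variety")
    bonus = 0.1 * concept.count("[revised")
    return {m: min(1.0, 0.5 + bonus) for m in metrics}

def refine(problem, iterations=3):
    """Actor-critic loop with a short-term memory of past feedback."""
    memory = []
    concept = generate_concept(problem)
    for _ in range(iterations):
        feedback = critique(concept)
        memory.append(feedback)              # history of interactions
        concept = generate_concept(problem, "; ".join(memory))
    return concept, score(concept)

concept, metrics = refine("passive cooling for buildings")
```

In a real system the two stubs would be API calls to the actor and critic models, and the memory list would be injected into the actor's prompt.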
- 
            Yamashita, Naomi; Evers, Vanessa; Yatani, Koji; Ding, Xianghua Sharon (Eds.)
            Free, publicly-accessible full text available April 25, 2026.
- 
            Free, publicly-accessible full text available March 4, 2026
- 
            Generative adversarial networks (GANs) have recently been proposed as a potentially disruptive approach to generative design due to their remarkable ability to generate visually appealing and realistic samples. Yet, we show that the current generator-discriminator architecture inherently limits the ability of GANs as a design concept generation (DCG) tool. Specifically, we conduct a DCG study on a large-scale dataset based on a GAN architecture to advance the understanding of the performance of these generative models in generating novel and diverse samples. Our findings, derived from a series of comprehensive and objective assessments, reveal that while the traditional GAN architecture can generate realistic samples, the generated and style-mixed samples closely resemble the training dataset, exhibiting significantly low creativity. We propose a new generic architecture for DCG with GANs (DCG-GAN) that enables GAN-based generative processes to be guided by geometric conditions and criteria such as novelty, diversity, and desirability. We validate the performance of the DCG-GAN model through a rigorous quantitative assessment procedure and an extensive qualitative assessment involving 89 participants. We conclude by providing several future research directions and insights for the engineering design community to realize the untapped potential of GANs for DCG.
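The creativity criteria that DCG-GAN adds on top of a plain GAN objective can be illustrated with a toy example. Designs are reduced to 1-D numbers purely for illustration, and the metric definitions here are simplified assumptions, not the paper's exact formulation.

```python
# Toy novelty/diversity criteria, assuming 1-D "designs": a generator
# that memorizes its training data scores zero novelty, which is the
# failure mode the abstract attributes to the standard architecture.

def novelty(sample, dataset):
    """Distance from a generated sample to its nearest training sample."""
    return min(abs(sample - x) for x in dataset)

def diversity(batch):
    """Mean pairwise distance within a generated batch."""
    pairs = [(a, b) for i, a in enumerate(batch) for b in batch[i + 1:]]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

training_data = [0.0, 1.0, 2.0]
memorized = [0.0, 1.0, 2.0]   # generator reproducing its training set
guided = [0.5, 3.0, 5.0]      # samples pushed away from the data

mem_novelty = sum(novelty(s, training_data) for s in memorized)
gui_novelty = sum(novelty(s, training_data) for s in guided)
```

Folding terms like these into the generator's loss is one way such criteria can guide training toward novel and diverse, rather than merely realistic, concepts.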
- 
            Aspect-based sentiment analysis (ABSA) enables a systematic identification of user opinions on particular aspects, thus enhancing the idea creation process in the initial stages of product/service design. Attention-based large language models (LLMs) like BERT and T5 have proven powerful in ABSA tasks. Yet, several key limitations remain, regarding both the ABSA task and the capabilities of attention-based models. First, existing research mainly focuses on relatively simpler ABSA subtasks such as aspect-based sentiment classification, while the task of extracting aspect, opinion, and sentiment in a unified model remains largely unaddressed. Second, current ABSA tasks overlook implicit opinions and sentiments. Third, most attention-based LLMs like BERT use position encoding in a linearly projected manner or through split-position relations in word distance schemes, which could lead to relation biases during the training process. This article addresses these gaps by (1) creating a new annotated dataset with five types of labels: aspect, category, opinion, sentiment, and implicit indicator (ACOSI); (2) developing a unified model capable of extracting all five types of labels simultaneously in a generative manner; and (3) designing a new position encoding method in the attention-based model. The numerical experiments conducted on a manually labeled dataset scraped from three major e-commerce retail stores for apparel and footwear products demonstrate the performance, scalability, and potential of the framework developed. The article concludes with recommendations for future research on automated need finding and sentiment analysis for user-centered design.
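A generative model that emits all five ACOSI labels as text needs its output parsed back into structured records. The serialization template below (pipe-separated fields in parentheses) is an assumption for illustration, not the paper's exact format.

```python
# Parse ACOSI quintuples from a generative model's text output,
# assuming a hypothetical "(aspect | category | opinion | sentiment |
# implicit-indicator)" serialization.
import re

FIELDS = ("aspect", "category", "opinion", "sentiment", "implicit")

def parse_acosi(generated: str):
    """Extract pipe-separated quintuples from generated text into dicts."""
    records = []
    for match in re.findall(r"\(([^)]*)\)", generated):
        parts = [p.strip() for p in match.split("|")]
        if len(parts) == 5:                 # keep only well-formed tuples
            records.append(dict(zip(FIELDS, parts)))
    return records

# Example model output for footwear/apparel reviews (illustrative):
output = ("(sole | footwear#comfort | wears out fast | negative | explicit) "
          "(sizing | apparel#fit | runs small | negative | implicit)")
records = parse_acosi(output)
```

The implicit-indicator field is what distinguishes ACOSI from quadruple-style ABSA outputs: it flags opinions and sentiments the reviewer never states directly.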
 An official website of the United States government