Throughout the design process, designers encounter diverse stimuli that influence their work. This influence is particularly notable during idea generation when it is augmented by novel design support tools that assist in inspiration discovery. However, fundamental questions remain regarding why and how the interactions afforded by these tools impact design behaviors. This work explores how designers search for inspirational stimuli using an AI-enabled multi-modal search platform, which supports queries using text-based and non-text-based inputs. Student and professional designers completed a think-aloud design exploration task using this platform to search for stimuli to inspire idea generation. We identify expertise and search modality as factors influencing design exploration, including the frequency and framing of searches, and the evaluation and utility of search results.
Investigating the Roles of Expertise and Modality in Designers’ Search for Inspirational Stimuli
Designers can benefit from inspirational stimuli when they are presented during the design process. However, encountering external stimuli can also lead designers to negative design outcomes by limiting exploration of the design space and idea generation. Prior work has investigated how specific features of inspirational stimuli can be beneficial or harmful to designers, but the processes designers use to search for and discover the stimuli leading to these outcomes are less well understood. The objective of this work is thus to better understand how designers search for inspirational design stimuli. Specifically, we investigate how factors such as designer expertise and search modality (e.g., text vs. visual-based) impact both explicit and implicit features of the search for design stimuli. A cognitive study was completed by novice and expert designers (seven students and eight professionals), who searched for design stimuli using a novel multi-modal search platform while following a think-aloud protocol. The multi-modal search platform enabled search using text and non-text inputs, and provided design stimuli in the form of 3D-model parts. This work presents methods to describe search processes at three levels defined in this paper: activities, behaviors, and pathways. Our findings show that design expertise and search modality influence search behavior. Illustrative examples of search processes that led designers to both negative and beneficial outcomes are presented and discussed, such as designers fixating on specific results or benefiting unexpectedly from unintentionally discovered inspirational stimuli. Overall, this work contributes to an improved understanding of how designers search for inspiration, and of the key factors influencing these behaviors.
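To make the three-level description concrete, the sketch below encodes a search session in that form: logged interface actions (activities) are grouped into behaviors, and the ordered behaviors form a pathway. This is a hypothetical illustration only; the class and field names are assumptions, not the authors' actual coding scheme.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Activity:
    """A single logged interface action (names are illustrative)."""
    timestamp: float
    kind: str    # e.g., "keyword_search", "part_search", "save_result"
    detail: str  # e.g., the query text or a part identifier

@dataclass
class Behavior:
    """A labeled group of related activities."""
    label: str   # e.g., "broad exploration", "refinement"
    activities: List[Activity]

@dataclass
class Pathway:
    """The ordered sequence of behaviors across a session."""
    behaviors: List[Behavior]

# A toy session: one broad keyword search, then refinement around a part.
session = Pathway(behaviors=[
    Behavior("broad exploration", [Activity(12.0, "keyword_search", "handle")]),
    Behavior("refinement", [Activity(95.5, "part_search", "part_0042")]),
])
print(len(session.behaviors))  # -> 2
```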
- Award ID(s): 2145432
- PAR ID: 10388777
- Journal Name: ASME IDETC, 34th International Conference on Design Theory and Methodology (DTM)
- Volume: 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Inspirational stimuli are known to be effective in supporting ideation during early-stage design. However, prior work has predominantly constrained designers to text-only queries when searching for stimuli, which is not consistent with real-world design behavior, where fluidity across modalities (e.g., visual, semantic) is standard practice. In the current work, we introduce a multi-modal search platform that retrieves inspirational stimuli in the form of 3D-model parts using text, appearance, and function-based search inputs. Computational methods leveraging a deep-learning approach are presented for designing and supporting this platform, which relies on deep neural networks trained on a large dataset of 3D-model parts. This work further presents the results of a cognitive study (n = 21) where the aforementioned search platform was used to find parts to inspire solutions to a design challenge. Participants engaged with three different search modalities: by keywords, by 3D parts, and by user-assembled 3D parts in their workspace. When searching by parts that were selected or in their workspace, participants had additional control over the similarity in appearance and function of results relative to the input. The results of this study demonstrate that the modality used impacts search behavior, including search frequency, how retrieved search results are engaged with, and how broadly the search space is covered. Specific results link interactions with the interface to search strategies participants may have used during the task. Findings suggest that when searching for inspirational stimuli, desired results can be achieved both by direct search inputs (e.g., by keyword) and by more randomly discovered examples, where a specific goal was not defined. Both search processes are found to be important to enable when designing search platforms for inspirational stimuli retrieval.
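As an illustration of the kind of retrieval such a platform might perform, the sketch below ranks parts by a weighted blend of appearance and function similarity over precomputed embeddings. The embeddings, weights, and function names are assumptions for illustration; the paper's actual networks and interface are not reproduced here.

```python
import numpy as np

# Hypothetical precomputed embeddings for a library of 3D-model parts.
# In practice these would come from separate appearance- and
# function-trained deep networks; random arrays stand in here.
rng = np.random.default_rng(0)
N_PARTS, DIM = 1000, 128
appearance_emb = rng.normal(size=(N_PARTS, DIM))
function_emb = rng.normal(size=(N_PARTS, DIM))

def cosine_sim(query, matrix):
    """Cosine similarity between one query vector and each row of matrix."""
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

def search_by_part(part_idx, w_appearance=0.5, w_function=0.5, k=10):
    """Retrieve the k parts most similar to a selected part, blending
    appearance- and function-based similarity with user-set weights."""
    sim = (w_appearance * cosine_sim(appearance_emb[part_idx], appearance_emb)
           + w_function * cosine_sim(function_emb[part_idx], function_emb))
    sim[part_idx] = -np.inf  # exclude the query part itself
    return np.argsort(sim)[::-1][:k]

# e.g., emphasize visual similarity over functional similarity:
print(search_by_part(42, w_appearance=0.8, w_function=0.2))
```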
Abstract: As inspirational stimuli can assist designers with achieving enhanced design outcomes, supporting the retrieval of impactful sources of inspiration is important. Existing methods facilitating this retrieval have relied mostly on semantic relationships, e.g., analogical distances. Increasingly, data-driven methods can be leveraged to represent diverse stimuli in terms of multi-modal information, enabling designers to access stimuli through less-explored, non-text-based relationships. Toward improved retrieval of multi-modal representations of inspirational stimuli, this work compares human-evaluated and computationally derived similarities between stimuli in terms of non-text-based visual and functional features. A human subjects study (n = 36) was conducted in which similarity assessments between triplets of 3D-model parts were collected and used to construct psychological embedding spaces. Distances between unique part embeddings were used to represent similarities in terms of visual and functional features. The obtained distances were compared with computed distances between embeddings of the same stimuli generated using artificial intelligence (AI)-based deep-learning approaches. When used to assess similarity in appearance and function, these representations were found to be largely consistent, with the highest agreement found when assessing pairs of stimuli with low similarity. Alignment between models was otherwise lower when identifying the same pairs of stimuli with higher levels of similarity. Importantly, qualitative data also revealed insights regarding how humans made similarity assessments, including more abstract information not captured using AI-based approaches. Toward providing inspiration to designers that considers design problems, ideas, and solutions in terms of non-text-based relationships, further exploration of how these relationships are represented and evaluated is encouraged.
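A minimal sketch of the distance comparison described above, under the assumption that one embedding is fit to human triplet judgments and the other comes from a deep network; the synthetic arrays below stand in for the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Illustrative assumption: human_emb is a low-dimensional psychological
# embedding fit to triplet judgments; ai_emb is a deep-learning feature
# space for the same parts. Random values are placeholders.
rng = np.random.default_rng(1)
n_parts = 40
human_emb = rng.normal(size=(n_parts, 2))
ai_emb = rng.normal(size=(n_parts, 64))

# Pairwise distances within each space, then a rank correlation to ask:
# do humans and the model order part pairs by similarity the same way?
human_d = pdist(human_emb)
ai_d = pdist(ai_emb)
rho, p = spearmanr(human_d, ai_d)
print(f"human-AI distance agreement: rho={rho:.2f} (p={p:.3f})")
```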
Abstract: Design artifacts provide a mechanism for illustrating design information and concepts, but their effectiveness relies on alignment across design agents in what these artifacts represent. This work investigates the agreement between multi-modal representations of design artifacts by humans and artificial intelligence (AI). Design artifacts are considered to constitute the stimuli designers interact with to become inspired (i.e., inspirational stimuli), whose retrieval often relies on computational methods using AI. To facilitate this process for multi-modal stimuli, a better understanding of human perspectives on non-semantic representations of design information, e.g., form- or function-based features, is motivated. This work compares and evaluates human and AI-based representations of 3D-model parts by visual and functional features. Humans and AI were found to share consistent representations of visual and functional similarities, which aligned well at coarse, but not more granular, levels of similarity. Human–AI alignment was higher when identifying low-similarity than high-similarity parts, suggesting mutual representation of the features underlying more obvious, rather than nuanced, differences. Human evaluation of part relationships in terms of belonging to the same or different categories revealed that human- and AI-derived relationships similarly reflect concepts of “near” and “far.” However, the levels of similarity corresponding to “near” and “far” differed depending on the criteria evaluated: “far” was associated with stimuli that were nearer when related visually than when related functionally. These findings contribute to a fundamental understanding of how humans evaluate information conveyed by AI-represented design artifacts, which is needed for successful human–AI collaboration in design.
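One way to probe coarse versus granular alignment is sketched below with synthetic distances: agreement on a median split (coarse) can be high even when correlations within fine-grained similarity bins (granular) are weak. The binning scheme and thresholds are illustrative assumptions, not the study's analysis.

```python
import numpy as np

# human_d and ai_d are pairwise distance vectors over the same part pairs
# (see the previous sketch); these synthetic values are placeholders.
rng = np.random.default_rng(2)
n_pairs = 780
human_d = rng.random(n_pairs)
ai_d = human_d + rng.normal(scale=0.3, size=n_pairs)  # noisy agreement

# Coarse check: do both spaces agree on which pairs fall in the most
# dissimilar half?
coarse_agree = np.mean((human_d > np.median(human_d))
                       == (ai_d > np.median(ai_d)))
print(f"coarse (median-split) agreement: {coarse_agree:.2f}")

# Granular check: correlation within narrow bands of human distance.
for lo, hi in [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]:
    mask = (human_d >= lo) & (human_d < hi)
    rho = np.corrcoef(human_d[mask], ai_d[mask])[0, 1]
    print(f"bin [{lo:.2f}, {hi:.2f}): within-bin correlation {rho:.2f}")
```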
Abstract: External sources of inspiration can promote the discovery of new ideas as designers ideate on a design task. Data-driven techniques can increasingly enable the retrieval of inspirational stimuli based on non-text-based representations, beyond the semantic features of stimuli. However, there is a lack of fundamental understanding regarding how humans evaluate similarity between non-semantic design stimuli (e.g., visual). Toward this aim, this work examines human-evaluated and computationally derived representations of the visual and functional similarities of 3D-model parts. A study was conducted in which participants (n = 36) provided triplet ratings of parts and categorized these parts into groups. Similarity is defined by distances within embedding spaces constructed using triplet ratings and deep-learning methods, representing human and computational representations, respectively. Distances between stimuli that are grouped together (or not) are examined to understand how the various methods and criteria used to define non-text-based similarity align with perceptions of 'near' and 'far'. Distinct boundaries in computed distances separating stimuli considered 'too far' were observed, and these boundaries included farther stimuli when modeling visual vs. functional attributes.
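The sketch below illustrates one simple way to estimate such a boundary from grouping data: compare within-group ('near') and between-group ('far') distances in an embedding space. All inputs are synthetic placeholders, and the midpoint rule is an assumed heuristic, not the paper's method.

```python
import numpy as np

# Assumed inputs: emb holds part embeddings (human- or AI-derived), and
# groups[i] is the category a participant assigned part i to.
rng = np.random.default_rng(3)
emb = rng.normal(size=(40, 8))
groups = rng.integers(0, 5, size=40)

# Collect pairwise distances, split by whether the pair shares a group.
within, between = [], []
for i in range(len(emb)):
    for j in range(i + 1, len(emb)):
        d = np.linalg.norm(emb[i] - emb[j])
        (within if groups[i] == groups[j] else between).append(d)

# A simple boundary estimate: the midpoint between mean within-group
# ('near') and mean between-group ('far') distances.
boundary = (np.mean(within) + np.mean(between)) / 2
print(f"near/far boundary estimate: {boundary:.2f}")
```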