Abstract: This paper presents a framework to describe and explain human-machine collaborative design, focusing on Design Space Exploration (DSE), a popular method in the early design of complex systems with roots in the well-known design-as-exploration paradigm. The human designer and a cognitive design assistant are both modeled as intelligent agents, each with an internal state (e.g., motivation, cognitive workload), a knowledge state (separated into domain, design-process, and problem-specific knowledge), an estimated state of the world (i.e., the status of the design task) and of the other agent, a hierarchy of goals (short-term and long-term, design and learning goals), and a set of long-term attributes (e.g., Kirton's Adaption-Innovation inventory style, risk aversion). The framework emphasizes the relation between design goals and learning goals in DSE, as previously highlighted in the literature (e.g., Concept-Knowledge theory, the LinD model), and builds on the theory of common ground from human-computer interaction (e.g., shared goals, plans, attention) as a building block for developing successful assistants and interactions. Recent studies in human-AI collaborative DSE are reviewed through the lens of the proposed framework, and new research questions are identified. The framework can help advance the theory of human-AI collaborative design by helping design researchers build promising hypotheses and design studies that test these hypotheses while accounting for the most relevant factors.
-
Abstract: Deep generative models have shown significant promise in improving performance in design space exploration, but there is limited understanding of their interpretability, a necessity when model explanations are desired and problems are ill-defined. Interpretability involves learning the design features behind design performance, here called designer learning. This study explores the effects of human-machine collaboration on designer learning and design performance. We conduct an experiment (N = 42) in which subjects design mechanical metamaterials using a conditional variational autoencoder. The independent variables are (i) the level of automation of design synthesis: manual (the user directly manipulates design variables), manual feature-based (the user manipulates the weights of features learned by the encoder), and semi-automated feature-based (the agent generates a local design from a start design and a user-selected step size); and (ii) feature semanticity: meaningful versus abstract features. We assess feature-specific learning using item response theory and design performance using utopia distance and hypervolume improvement. The results suggest that design performance depends on the subjects' feature-specific knowledge, emphasizing the precursory role of learning. Semi-automated synthesis locally improves the utopia distance, but it does not yield higher global hypervolume improvement than manual design synthesis, and it reduces designer learning compared to manual feature-based synthesis. Subjects learn semantic features better than abstract features only when design performance is sensitive to them. Potential cognitive constructs influencing learning in human-machine collaborative settings, such as cognitive load and recognition heuristics, are discussed.
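The two performance metrics named in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a bi-objective minimization problem in 2D, and the function names (`utopia_distance`, `hypervolume_2d`, `hypervolume_improvement`) and the reference-point convention are illustrative choices, not taken from the paper.

```python
import math

def utopia_distance(point, utopia):
    # Euclidean distance from a design's objective vector to the utopia
    # (ideal) point; smaller is better.
    return math.dist(point, utopia)

def hypervolume_2d(points, ref):
    # Area dominated by a set of 2D objective vectors (minimization)
    # relative to a reference point that every counted design must dominate.
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
    pts.sort()  # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # sweep keeps only non-dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hypervolume_improvement(new_point, archive, ref):
    # Gain in dominated hypervolume from adding one candidate design
    # to the archive of designs found so far.
    return (hypervolume_2d(list(archive) + [new_point], ref)
            - hypervolume_2d(archive, ref))
```

For example, with reference point (4, 4) and an archive containing only (1, 3), adding the design (2, 1) enlarges the dominated area from 3 to 7, a hypervolume improvement of 4.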