

Title: Toward a Theory of Systems Engineering Processes: A Principal–Agent Model of a One-Shot, Shallow Process
Systems engineering processes (SEPs) coordinate the effort of different individuals to generate a product satisfying certain requirements. As the involved engineers are self-interested agents, the goals at different levels of the systems engineering hierarchy may deviate from the system-level goals, which may cause budget and schedule overruns. Therefore, there is a need for a systems engineering theory that accounts for human behavior in systems design. As experience in the physical sciences shows, much knowledge can be generated by studying simple hypothetical scenarios that nevertheless retain some aspects of the original problem. To this end, the objective of this article is to study the simplest conceivable SEP: a principal–agent model of a one-shot, shallow SEP. We assume that the systems engineer (SE) maximizes the expected utility of the system, while the subsystem engineers (sSE) seek to maximize their own expected utilities. Furthermore, the SE is unable to monitor the effort of the sSE and may not have complete information about their types. However, the SE can incentivize the sSE by proposing specific contracts. To obtain an optimal incentive, we pose and solve numerically a bilevel optimization problem. Through extensive simulations, we study the optimal incentives arising from different system-level value functions under various combinations of effort costs, problem-solving skills, and task complexities. Our numerical examples show that the requirements passed down to the agents increase as task complexity and uncertainty grow, and decrease as the agents' costs increase.
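The bilevel structure described in the abstract can be illustrated with a minimal numeric sketch. All functional forms and constants below are our own assumptions, not the article's model: the agent's outcome is effort plus Gaussian noise, the agent is paid a fixed bonus for meeting the passed-down requirement, and both levels are solved by grid search (inner: agent's best response; outer: principal's choice of requirement).

```python
import numpy as np
from scipy.stats import norm

# Hedged illustration (assumed forms, not the article's exact model) of a
# one-shot, shallow principal-agent game. The subsystem engineer (agent)
# picks effort e; the outcome is q = e + noise, noise ~ N(0, SIGMA^2).
# The agent is paid BONUS if q meets the passed-down requirement r.
# The systems engineer (principal) earns V only if q meets the true
# requirement R_TRUE. All numbers are illustrative.

SIGMA, COST, V, R_TRUE, BONUS = 0.5, 1.0, 10.0, 1.0, 3.0

def agent_best_effort(r, bonus=BONUS, efforts=np.linspace(0, 3, 301)):
    """Inner problem: agent maximizes expected pay minus quadratic effort cost."""
    u = bonus * norm.sf(r - efforts, scale=SIGMA) - COST * efforts**2
    return efforts[np.argmax(u)]

def principal_utility(r):
    """Outer problem: principal anticipates the agent's best response to r."""
    e = agent_best_effort(r)
    p_true = norm.sf(R_TRUE - e, scale=SIGMA)   # P(meet true requirement)
    p_pass = norm.sf(r - e, scale=SIGMA)        # P(agent gets paid)
    return V * p_true - BONUS * p_pass

reqs = np.linspace(0.0, 3.0, 121)
r_opt = reqs[np.argmax([principal_utility(r) for r in reqs])]
print(f"optimal passed-down requirement: {r_opt:.2f} (true: {R_TRUE})")
```

Grid search stands in for the numerical solver; the point is only the nesting of the two optimizations, with the principal optimizing over the agent's anticipated best response.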
Award ID(s):
1728165
NSF-PAR ID:
10174145
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Systems Journal
ISSN:
1932-8184
Page Range / eLocation ID:
1 to 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Systems engineering processes coordinate the efforts of many individuals to design a complex system. However, the goals of the involved individuals do not necessarily align with the system-level goals. Everyone, including managers, systems engineers, subsystem engineers, component designers, and contractors, is self-interested. It is not currently understood how this discrepancy between organizational and personal goals affects the outcome of complex systems engineering processes. To answer this question, we need a systems engineering theory that accounts for human behavior. Such a theory can be ideally expressed as a dynamic hierarchical network game of incomplete information. The nodes of this network represent individual agents and the edges the transfer of information and incentives. All agents decide independently on how much effort they should devote to a delegated task by maximizing their expected utility; the expectation is over their beliefs about the actions of all other individuals and the moves of nature. An essential component of such a model is the quality function, defined as the map between an agent’s effort and the quality of their job outcome. In the economics literature, the quality function is assumed to be a linear function of effort with additive Gaussian noise. This simplistic assumption ignores two critical factors relevant to systems engineering: (1) the complexity of the design task, and (2) the problem-solving skills of the agent. Systems engineers establish their beliefs about these two factors through years of job experience. In this paper, we encode these beliefs in clear mathematical statements about the form of the quality function. Our approach proceeds in two steps: (1) we construct a generative stochastic model of the delegated task, and (2) we develop a reduced order representation suitable for use in a more extensive game-theoretic model of a systems engineering process. 
Focusing on the early design stages of a systems engineering process, we model the design task as a function maximization problem and, thus, we associate the systems engineer's beliefs about the complexity of the task with their beliefs about the complexity of the function being maximized. Furthermore, we associate an agent's problem-solving skills with the strategy they use to solve the underlying function maximization problem. We identify two agent types: "naïve" (follows a random search strategy) and "skillful" (follows a Bayesian global optimization strategy). Through an extensive simulation study, we show that the assumption of a linear quality function is only valid for small effort levels. In general, the quality function is an increasing, concave function whose derivative and curvature depend on the problem's complexity and the agent's skills.
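The shape claimed for the "naïve" agent's quality function can be reproduced with a toy simulation. This is a minimal sketch under our own assumptions, not the paper's generative model: the delegated task is maximizing an unknown function, the agent samples it at random, and the quality after effort t is the best value found so far. For samples drawn from Uniform(0, 1), the expected best of t draws is t/(t+1): increasing, concave, and roughly linear only for small t.

```python
import numpy as np

# Monte Carlo check of the best-of-t quality function for a "naive"
# random-search agent (illustrative assumption: Uniform(0, 1) draws).
rng = np.random.default_rng(0)

def quality(effort, n_trials=20_000):
    """Estimate of the expected best value after `effort` random samples."""
    return rng.random((n_trials, effort)).max(axis=1).mean()

for t in (1, 2, 5, 10, 50):
    # Compare the simulated quality against the closed form t / (t + 1).
    print(t, round(quality(t), 3), round(t / (t + 1), 3))
```

The diminishing marginal gain per unit of effort is the concavity the abstract refers to; a Bayesian-optimization agent would trace a different, steeper curve under the same setup.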
  2. We present a principal-agent model of a one-shot, shallow, systems engineering process. The process is "one-shot" in the sense that decisions are made during a one-time step and that they are final. The term "shallow" refers to a one-layer hierarchy of the process. Specifically, we assume that the systems engineer has already decomposed the problem into subsystems and that each subsystem is assigned to a different subsystem engineer. Each subsystem engineer works independently to maximize their own expected payoff. The goal of the systems engineer is to maximize the system-level payoff by incentivizing the subsystem engineers. We restrict our attention to requirements-based system-level payoffs, i.e., the systems engineer makes a profit only if all the design requirements are met. We illustrate the model using the design of an Earth-orbiting satellite system where the systems engineer determines the optimum incentive structures and requirements for two subsystems: the propulsion subsystem and the power subsystem. The model enables the analysis of a systems engineer's decisions about optimal passed-down requirements and incentives for subsystem engineers under different levels of task difficulty and associated costs. Sample results, for the case of risk-neutral systems and subsystems engineers, show that it is not always in the best interest of the systems engineer to pass down the true requirements. As expected, the model predicts that for small to moderate task uncertainties the optimal requirements are higher than the true ones, effectively eliminating the probability of failure for the systems engineer. In contrast, the model predicts that for large task uncertainties the optimal requirements should be smaller than the true ones in order to lure the subsystem engineers into participation. 
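The participation effect mentioned at the end can be made concrete with a small sketch. The functional forms here are our own illustrative assumptions, not the paper's: the agent accepts the contract only if some effort level yields expected utility above a reservation value, and when task noise is large a demanding requirement can push every effort level below that threshold.

```python
import numpy as np
from scipy.stats import norm

# Illustrative participation constraint in a requirement-based contract
# (assumed forms and constants, not the paper's exact model).
COST, R_TRUE, BONUS, U0 = 1.0, 1.0, 3.0, 0.2   # U0: reservation utility
EFFORTS = np.linspace(0.0, 3.0, 301)

def best_response(r, sigma):
    """Agent's chosen effort, or None if the contract is declined."""
    u = BONUS * norm.sf(r - EFFORTS, scale=sigma) - COST * EFFORTS**2
    return EFFORTS[np.argmax(u)] if u.max() >= U0 else None

def optimal_requirement(sigma, V=10.0, reqs=np.linspace(0.0, 2.5, 101)):
    """Principal's best passed-down requirement given the noise level sigma."""
    best_r, best_u = None, -np.inf
    for r in reqs:
        e = best_response(r, sigma)
        if e is None:
            continue  # agent declines; the principal earns nothing
        u = V * norm.sf(R_TRUE - e, scale=sigma) - BONUS * norm.sf(r - e, scale=sigma)
        if u > best_u:
            best_r, best_u = r, u
    return best_r

for sigma in (0.2, 0.5, 1.5):
    print(f"sigma={sigma}: optimal passed-down requirement = {optimal_requirement(sigma):.2f}")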
  3. There is growing evidence of the effectiveness of project-based learning (PBL) in preparing students to solve complex problems. In PBL implementations in engineering, students are treated as professional engineers facing projects centered around real-world problems, including the complexity and uncertainty that influence such problems. Not only does this help students to analyze and solve an authentic real-world task, promoting critical thinking, but also students learn from each other, learning valuable communication and teamwork skills. Faculty play an important part by assuming non-conventional roles (e.g., client, senior professional engineer, consultant) to help students throughout this instructional and learning approach. Typically in PBLs, students work on projects over extended periods of time that culminate in realistic products or presentations. In order to be successful, students need to learn how to frame a problem, identify stakeholders and their requirements, design and select concepts, test them, and so on. Two different implementations of PBL projects in a fluid mechanics course are presented in this paper. This required, junior-level course has been taught since 2014 by the same instructor. The first PBL project presented is a complete design of pumped pipeline systems for a hypothetical plant. In the second project, engineering students partnered with pre-service teachers to design and teach an elementary school lesson on fluid mechanics concepts. With the PBL implementations, it is expected that students: 1) engage in a deeper learning process where concepts can be reemphasized, and students can realize applicability; 2) develop and practice teamwork skills; 3) learn and practice how to communicate effectively to peers and to those from other fields; and 4) increase their confidence working on open-ended situations and problems. The goal of this paper is to present the experiences of the authors with both PBL implementations. 
It explains how the projects were scaffolded through the entire semester, including how the sequence of course content was modified, how team dynamics were monitored, the faculty roles, and the end products and presentations. Students' experiences are also presented. To evaluate and compare students' learning and satisfaction with the team experience between the two PBL implementations, a shortened version of the NCEES FE exam and the Comprehensive Assessment of Team Member Effectiveness (CATME) survey were utilized. Students completed the FE exam during the first week and then again during the last week of the semester in order to assess students' growth in fluid mechanics knowledge. The CATME survey was completed mid-semester to help faculty identify and address problems within team dynamics, and at the end of the semester to evaluate individual students' teamwork performance. The results showed no major differences in terms of the learned fluid mechanics content; however, the data showed interesting preliminary observations regarding teamwork satisfaction. Through reflective assignments (e.g., short answer reflections, focus groups), student perceptions of the PBL implementations are discussed in the paper. Finally, the paper presents some of the challenges and lessons learned from implementing both projects multiple times, along with access to some of the PBL course materials and assignments. 
  4. Abstract

    In this paper, we investigate the dichotomy between system design delegation driven by requirement allocation and delegation driven by objective allocation. Specifically, we investigate this dichotomy through the lens of agency theory, which addresses cases where an agent makes decisions on behalf of another, that is, a principal. In current practice, design delegation largely involves requirement allocation as a means to inform agents of the desirable system characteristics. The value‐driven design paradigm proposes replacing requirements with objective, or trade‐off, functions to better guide agents toward optimal systems. We apply and adapt the principal–agent mathematical model to the design delegation problem to determine if a principal, that is, the delegator, should communicate using requirements or objectives with her agent. In this model, we assume the case of a single principal and single agent where the agent has certain domain knowledge the principal does not have and the agent accrues costs while solving a delegated design problem. Under the assumptions of the mathematical model, we show that the requirement allocation paradigm can yield greater value to the principal than objective allocation, despite the limitations requirement allocation places on the principal's ability to learn information from the agent. However, relaxing model assumptions can impact the value proposition of requirement allocation in favor of objective allocation. Therefore, a resolution to the requirement–objective dichotomy may be context dependent. The results and the analytical framework used to derive them provide a new, foundational perspective with which to investigate allocation strategies.
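A toy instance (our own construction, not the article's model) shows how the two delegation modes can diverge. The design variable is x in [0, 2]; the principal values v(x) = 4x - x², and the agent privately incurs cost c(x) = k·x with k unknown to the principal. Under requirement allocation the agent delivers the cheapest design meeting v(x) ≥ r; under objective allocation the agent maximizes v(x) - c(x), shading the design toward low cost.

```python
import numpy as np

# Toy comparison of requirement allocation vs objective allocation
# (illustrative functional forms; all constants are assumptions).
X = np.linspace(0.0, 2.0, 2001)        # candidate designs
v = 4 * X - X**2                       # principal's value function

def requirement_outcome(r, k):
    """Principal's value when the agent picks the cheapest design with v >= r."""
    feasible = X[v >= r]
    if feasible.size == 0:
        return None                    # requirement is infeasible
    x = feasible[np.argmin(k * feasible)]
    return 4 * x - x**2

def objective_outcome(k):
    """Principal's value when the agent maximizes v(x) - k*x instead."""
    x = X[np.argmax(v - k * X)]
    return 4 * x - x**2

k = 3.0  # agent's private unit cost
print("requirement r=3.75:", round(requirement_outcome(3.75, k), 3))
print("objective delegation:", round(objective_outcome(k), 3))
```

In this instance a well-chosen requirement extracts more value for the principal than handing over the objective, mirroring the article's headline result; changing k or the cost structure can flip the comparison, which is the context dependence the abstract notes.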

     
  5. Nowadays, there is a fast-paced shift from legacy telecommunication systems to novel software-defined network (SDN) architectures that can support on-the-fly network reconfiguration, therefore empowering advanced traffic engineering mechanisms. Despite this momentum, migration to SDN cannot be realized at once, especially in the high-end networks of Internet service providers (ISPs). It is expected that ISPs will gradually upgrade their networks to SDN over a period that spans several years. In this paper, we study the SDN upgrading problem in an ISP network: which nodes to upgrade and when. We consider a general model that captures different migration costs and network topologies, and two plausible ISP objectives: 1) the maximization of the traffic that traverses at least one SDN node, and 2) the maximization of the number of dynamically selectable routing paths enabled by SDN nodes. We leverage the theory of submodular and supermodular functions to devise algorithms with provable approximation ratios for each objective. Using real-world network topologies and traffic matrices, we evaluate the performance of our algorithms and show up to 54% gains over state-of-the-art methods. Moreover, we describe the interplay between the two objectives; maximizing one may cause a factor of 2 loss to the other. We also study the dual upgrading problem, i.e., minimizing the upgrading cost for the ISP while ensuring specific performance goals. Our analysis shows that our proposed algorithm can achieve up to 2.5 times lower cost to ensure performance goals over state-of-the-art methods. 
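The first objective, covering traffic with at least one SDN node, is a monotone submodular function of the upgraded node set, so greedy marginal-gain selection carries the classic (1 - 1/e) guarantee. The sketch below uses a tiny hypothetical topology and traffic matrix (our assumptions, not the paper's data or exact algorithm) to show the greedy step.

```python
# Greedy sketch for the traffic-coverage objective. Each flow is a
# (traffic volume, set of nodes on its path) pair -- values are assumed.
FLOWS = [
    (10.0, {"a", "b", "c"}),
    (6.0,  {"c", "d"}),
    (4.0,  {"e"}),
    (8.0,  {"b", "d", "e"}),
]
NODES = {"a", "b", "c", "d", "e"}

def covered_traffic(upgraded):
    """Total traffic on flows whose path touches at least one upgraded node."""
    return sum(vol for vol, path in FLOWS if path & upgraded)

def greedy_upgrade(budget):
    """Pick up to `budget` nodes, each time taking the largest marginal gain."""
    chosen = set()
    for _ in range(budget):
        gains = {n: covered_traffic(chosen | {n}) - covered_traffic(chosen)
                 for n in NODES - chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no remaining marginal gain
        chosen.add(best)
    return chosen

chosen = greedy_upgrade(2)
print(chosen, covered_traffic(chosen))
```

Submodularity is visible in the marginal gains: a node's contribution can only shrink as more nodes are upgraded, which is exactly the diminishing-returns property the approximation guarantee rests on.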