

Title: The lean startup method: Early‐stage teams and hypothesis‐based probing of business ideas
Abstract

Research Summary

We examine a learning‐by‐doing methodology for iteration of early‐stage business ideas known as the “lean startup.” The purpose of this article is to lay out and test the key assumptions of the method, examining one particularly relevant boundary condition: the composition of the startup team. Using unique and detailed longitudinal data on 152 NSF‐supported lean‐startup (I‐Corps) teams, we find that the key components of the method—hypothesis formulation, probing, and business idea convergence—link up as expected. We also find that team composition is an important boundary condition: business‐educated (MBA) members resist the use of the method, but appreciate its value ex post. Formal training in learning‐by‐thinking methods thus appears to limit the spread of learning‐by‐doing methods. In this way, business theory constrains business practice.

Managerial Summary

Lean startup methodology has rapidly become one of the most widely used and trusted innovation and entrepreneurship methods among corporations, startup accelerators, and policymakers. Unfortunately, it has largely been portrayed as a one-size-fits-all solution: its key assumptions have been subject to little rigorous empirical testing, and the possibility of critical boundary conditions has been ignored. Our empirical testing supports the key assumptions of the method, but points to the business education of team members as a critical boundary condition. Specifically, MBAs resist the use of the method despite being in a strong position to leverage it. Results from a post hoc analysis we conducted also suggest that greater engagement with the method relates to higher firm performance in the 18-month period following the lean startup intervention.

 
NSF-PAR ID:
10454725
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Strategic Entrepreneurship Journal
Volume:
14
Issue:
4
ISSN:
1932-4391
Page Range / eLocation ID:
p. 570-593
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Research summary

    One of the established findings in the spinout literature is that founders with prior industry experience assemble larger entrepreneurial teams and create better-performing startups. We examine the role of prior industry experience in the startup's next stage: its hiring of new employees. We tackle two empirical challenges, the mutual aspect of hiring and the effect of unobserved variables on employees' earnings, using a two-sided matching model. Our results reveal that even firms founded by entrepreneurs without industry experience can attract new employees with such experience if the founders start with a large entrepreneurial team. Further, startups provide new hires with an earnings premium for their industry experience. Our approach illustrates the benefits of matching models over traditional regressions.

    Managerial summary

    Growing startups face the question of whom to hire and how much to compensate new hires. Simultaneously, prospective hires ask which startup to join and what their salary will be. We explore these questions using a novel method that models the mutual selection process. In the context of five technological manufacturing industries, we find that industry experience within the founding team may not be necessary to attract high-quality new hires if the startup can signal its own quality through other means, such as a larger founding team. Our results also indicate that startups prefer employees with industry experience and offer them a wage premium. Thus, employees seeking startup employment benefit from gaining industry experience before joining a startup.

    A video abstract is available at https://youtu.be/w00YzYi5VqA.
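    The "mutual selection" framing above can be made concrete with a toy two-sided matching routine. The sketch below uses deferred acceptance (Gale-Shapley), a deliberate simplification: the study estimates a structural econometric matching model, not this algorithm, and all startup names and preference lists here are invented for illustration.

    ```python
    def deferred_acceptance(startup_prefs, worker_prefs):
        """One-to-one matching: each startup 'proposes' down its ranked list of
        workers; each worker holds on to the best offer received so far."""
        free = list(startup_prefs)                  # startups not yet matched
        next_pick = {s: 0 for s in startup_prefs}   # index of next worker to approach
        held = {}                                   # worker -> startup currently held

        while free:
            s = free.pop(0)
            w = startup_prefs[s][next_pick[s]]
            next_pick[s] += 1
            if w not in held:
                held[w] = s                         # worker accepts the first offer
            elif worker_prefs[w].index(s) < worker_prefs[w].index(held[w]):
                free.append(held[w])                # worker trades up; old startup re-enters
                held[w] = s
            else:
                free.append(s)                      # offer rejected; startup tries its next pick
        return {s: w for w, s in held.items()}

    # Invented example: both startups rank worker "x" first, but "x" prefers "B".
    startup_prefs = {"A": ["x", "y"], "B": ["x", "y"]}
    worker_prefs = {"x": ["B", "A"], "y": ["A", "B"]}
    matching = deferred_acceptance(startup_prefs, worker_prefs)
    ```

    The point of the toy is only that the outcome depends on preferences on both sides at once, which is exactly what a one-sided wage regression cannot capture.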

     
  2. Abstract (100 words)

    Jurors are increasingly exposed to scientific information in the courtroom. To determine whether providing jurors with gist information would assist their ability to make well-informed decisions, the present experiment utilized a Fuzzy Trace Theory-inspired intervention and tested it against traditional legal safeguards (i.e., judge instructions) while varying the scientific quality of the evidence. The results indicate that jurors who viewed high-quality evidence rated the scientific evidence significantly higher than those who viewed low-quality evidence, but were unable to moderate the credibility of the expert witness or apply damages appropriately, resulting in poor calibration.

    Summary (<1000 words)

    Jurors and juries are increasingly exposed to scientific information in the courtroom, and it remains unclear when they will base their decisions on a reasonable understanding of the relevant scientific information. Without such knowledge, the ability of jurors and juries to make well-informed decisions may be at risk, increasing the chances of unjust outcomes (e.g., false convictions in criminal cases). Therefore, there is a critical need to understand the conditions that affect jurors' and juries' sensitivity to the qualities of scientific information and to identify safeguards that can assist with scientific calibration in the courtroom. The current project addresses these issues with an ecologically valid experimental paradigm, making it possible to assess causal effects of evidence quality and safeguards as well as the role of a host of individual-difference variables that may affect perceptions of testimony by scientific experts, as well as liability, in a civil case. Our main goal was to develop a simple, theoretically grounded tool to enable triers of fact (individual jurors) with a range of scientific reasoning abilities to appropriately weigh scientific evidence in court.
    We did so by testing a Fuzzy Trace Theory-inspired intervention in court against traditional legal safeguards. Appropriate use of scientific evidence reflects good calibration, which we define as being influenced more by strong scientific information than by weak scientific information. Inappropriate use reflects poor calibration, defined as relative insensitivity to the strength of scientific information. Fuzzy Trace Theory (Reyna & Brainerd, 1995) predicts that techniques for improving calibration can come from presentation of an easy-to-interpret, bottom-line "gist" of the information. Our central hypothesis was that laypeople's appropriate use of scientific information would be moderated both by external situational conditions (e.g., the quality of the scientific information itself; a decision aid designed to convey the "gist" of the information clearly) and by individual differences among people (e.g., scientific reasoning skills, cognitive reflection tendencies, numeracy, need for cognition, attitudes toward and trust in science). Identifying factors that promote jurors' appropriate understanding of and reliance on scientific information will contribute to general theories of reasoning based on scientific evidence, while also providing an evidence-based framework for improving the courts' use of scientific information. All hypotheses were preregistered on the Open Science Framework.

    Method

    Participants completed six questionnaires (counterbalanced): Need for Cognition Scale (NCS; 18 items), Cognitive Reflection Test (CRT; 7 items), Abbreviated Numeracy Scale (ABS; 6 items), Scientific Reasoning Scale (SRS; 11 items), Trust in Science (TIS; 29 items), and Attitudes towards Science (ATS; 7 items). Participants then viewed a video depicting a civil trial in which the defendant sought damages from the plaintiff for injuries caused by a fall.
    The defendant (bar patron) alleged that the plaintiff (bartender) pushed him, causing him to fall and hit his head on the hard floor. Participants were informed at the outset that the defendant was liable; therefore, their task was to determine whether the plaintiff should be compensated. Participants were randomly assigned to 1 of 6 experimental conditions: 2 (quality of scientific evidence: high vs. low) x 3 (safeguard to improve calibration: gist information, no-gist information [control], jury instructions). An expert witness (neuroscientist) hired by the court testified regarding the scientific strength of fMRI data (high [90 to 10 signal-to-noise ratio] vs. low [50 to 50 signal-to-noise ratio]) and presented gist or no-gist information both verbally (i.e., fairly high/about average) and visually (i.e., a graph). After viewing the video, participants were asked whether they would like to award damages. If they indicated yes, they were asked to enter a dollar amount. Participants then completed the Positive and Negative Affect Schedule-Modified Short Form (PANAS-MSF; 16 items), the expert Witness Credibility Scale (WCS; 20 items), Witness Credibility and Influence on damages for each witness, manipulation check questions, and Understanding Scientific Testimony (UST; 10 items); 3 additional measures were collected, but are beyond the scope of the current investigation. Finally, participants completed demographic questions, including questions about their scientific background and experience. The study was completed via Qualtrics, with participation from students (online vs. in-lab), MTurkers, and non-student community members. After removing those who failed attention check questions, 469 participants remained (243 men, 224 women, 2 did not specify gender) from a variety of racial and ethnic backgrounds (70.2% White, non-Hispanic).

    Results and Discussion

    There were three primary outcomes: quality of the scientific evidence, expert credibility (WCS), and damages.
    During initial analyses, each dependent variable was submitted to a separate 3 (Gist Safeguard: safeguard, no safeguard, judge instructions) x 2 (Scientific Quality: high, low) analysis of variance (ANOVA). Consistent with hypotheses, there was a significant main effect of scientific quality on strength of evidence, F(1, 463)=5.099, p=.024; participants who viewed the high-quality evidence rated the scientific evidence significantly higher (M=7.44) than those who viewed the low-quality evidence (M=7.06). There were no significant main effects or interactions for witness credibility, indicating that the expert who provided scientific testimony was seen as equally credible regardless of scientific quality or gist safeguard. Finally, for damages, consistent with hypotheses, there was a marginally significant interaction between Gist Safeguard and Scientific Quality, F(2, 273)=2.916, p=.056. However, post hoc t-tests revealed that significantly higher damages were awarded for low (M=11.50) versus high (M=10.51) scientific-quality evidence, F(1, 273)=3.955, p=.048, in the no-gist-with-judge-instructions safeguard condition, which was contrary to hypotheses. The data suggest that the judge instructions alone reversed the pattern: although the difference was nonsignificant, those who received the no-gist safeguard without judge instructions awarded higher damages in the high (M=11.34) versus low (M=10.84) scientific-quality evidence conditions, F(1, 273)=1.059, p=.30. Together, these provide promising initial results indicating that participants were able to differentiate effectively between high and low scientific quality of evidence, though they utilized the scientific evidence inappropriately through their inability to discern expert credibility and apply damages, resulting in poor calibration.
    These results will provide the basis for more sophisticated analyses, including higher-order interactions with individual differences (e.g., need for cognition) as well as tests of mediation using path analyses. [References omitted but available by request]

    Learning Objective

    Participants will be able to determine whether providing jurors with gist information would assist their ability to award damages in a civil trial.
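    The main-effect tests reported above are standard ANOVA F ratios (mean square between groups over mean square within groups). As an illustration of the computation only, not a reproduction of the study's data, the sketch below computes a one-way F statistic with numpy; the two groups of ratings are invented.

    ```python
    import numpy as np

    def one_way_F(*groups):
        """F statistic for a one-way ANOVA: MS_between / MS_within."""
        groups = [np.asarray(g, dtype=float) for g in groups]
        all_x = np.concatenate(groups)
        grand = all_x.mean()
        # Between-groups sum of squares: group sizes times squared mean deviations.
        ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
        # Within-groups sum of squares: squared deviations from each group mean.
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        df_between = len(groups) - 1
        df_within = len(all_x) - len(groups)
        return (ss_between / df_between) / (ss_within / df_within)

    # Invented strength-of-evidence ratings for two quality conditions:
    high_quality = [8, 7, 9, 7, 8, 6, 8]
    low_quality = [6, 7, 5, 7, 6, 8, 6]
    F = one_way_F(high_quality, low_quality)
    ```

    The obtained F is then compared against the critical value for its degrees of freedom; the study's factorial design additionally partitions variance across the second factor and the interaction.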
  3.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

    Introduction

    This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further in the context of any challenge run on confidential use cases or with sensitive data.
    Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

    The challenge: enabling wide participation

    With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
    1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist).
    2. No participant should need to be an expert data scientist.
    3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently).
    4. No participant should need to provide their own computing resources.
    In addition to the above, our platform should further:
    • guide participants through the entire process from sign-up to model submission,
    • facilitate collaboration, and
    • provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
    The platform

    The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources.
    The starter kit also enabled submission of the final code to a data storage location to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform.

    Measuring success

    The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performances that would reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication.
    Equally important to solving the scientific challenge, however, was understanding whether we had managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants. Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being software engineers, and 2 people had expertise in neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.

    Conclusion

    Given the growing complexity of data science problems and increasing dataset sizes, it is imperative to enable collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
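    The "dedicated spots" workflow described above, where participants fill in custom pre-processing, model, and post-processing code while a harness handles data access and submission, can be sketched as below. All function names and the threshold "model" are hypothetical placeholders, not the actual starter-kit API or a trained network.

    ```python
    import numpy as np

    def preprocess(eeg):
        """Participant-defined pre-processing, e.g. per-channel normalisation
        of an array shaped (channels, windows)."""
        eeg = np.asarray(eeg, dtype=float)
        return (eeg - eeg.mean(axis=1, keepdims=True)) / (eeg.std(axis=1, keepdims=True) + 1e-8)

    def predict(features):
        """Participant-defined model; a trivial per-window energy threshold
        stands in here for a trained TensorFlow/PyTorch network."""
        energy = (features ** 2).mean(axis=0)
        return (energy > 1.5).astype(int)   # 1 = seizure-like window

    def postprocess(labels, min_run=3):
        """Participant-defined smoothing: drop detected runs shorter than
        min_run consecutive windows."""
        labels = np.asarray(labels)
        out = np.zeros_like(labels)
        i, n = 0, len(labels)
        while i < n:
            if labels[i]:
                j = i
                while j < n and labels[j]:
                    j += 1            # extend to the end of this run of 1s
                if j - i >= min_run:
                    out[i:j] = 1      # keep only sufficiently long runs
                i = j
            else:
                i += 1
        return out

    def label_recording(eeg):
        """The pipeline a harness of this kind would run on each recording."""
        return postprocess(predict(preprocess(eeg)))
    ```

    The value of the pattern is that participants only ever touch the three hook functions; the surrounding infrastructure (storage, compute, scoring) stays identical across teams.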
  4. ABSTRACT

    CONTEXT The peer review process plays a critical role in ensuring the quality of work published within a field and advancing the knowledge within the research community. However, the process of peer review largely remains a black box to many scholars, especially those with less experience within the community. Therefore, there is a need to illuminate the peer review process for the research community.

    PURPOSE OR GOAL To more transparently reveal the contents of the black box around the peer review process, we interviewed editors (associate and deputy editors) of the Journal of Engineering Education (JEE) to provide editor perspectives on the overall peer review process. The goal of this paper is to clearly articulate the behind-the-scenes processes of peer review, as well as the expectations and perceptions of the editors with respect to publishing within JEE. By bringing these processes to light, we hope that more members of the field will be aware of the overall process and the associated expectations for contributing to the field.

    APPROACH OR METHODOLOGY/METHODS To meet the goals of this study, we conducted semi-structured interviews with six editors of JEE who worked in the field of engineering education research (EER), as part of a larger project exploring the boundaries of the field as expressed within the peer review process. The interviewer from the research team followed a protocol but also asked additional questions to elicit more details in some cases. The interviews were recorded, transcribed, and thematically coded using an open-coding process.

    ACTUAL OR ANTICIPATED OUTCOMES Based on the analysis of the editor interviews, we present three critical aspects of the peer review process: the types of editors, the process editors typically use to identify reviewers, and the types of decisions made throughout the process.
    Additionally, we highlight considerations and advice from the editors to help members of the EER community develop.

    CONCLUSIONS/RECOMMENDATIONS/SUMMARY The current study makes the editors' perspectives and decision-making processes more explicit to readers. These decision-making processes involve careful considerations as well as challenges. By making them visible, we hope to help members of the EER community gain a better understanding of what goes on behind the scenes of the peer review process.
  5. Landscape architects and designers often use case studies as a method for understanding the built environment. Understanding the opportunities and constraints of past projects is critical to moving the profession forward while generating actionable and quantitative data for use in future projects. Firms rarely have the opportunity to learn from their own work, as performing post-occupancy evaluation using traditional methods is costly and time consuming due to limited contracts and the business repercussions of excessive non-billable time. Taking advantage of current artificial intelligence and deep learning technologies, a system is under development to automate the data collection typical of post-occupancy evaluation and inventory. Using multiple cameras and (admittedly high-end) consumer-grade hardware, the system is capable of accurately identifying people within a defined area without recording personally identifying information or maintaining a log of people's images. The unique ID codes assigned to individuals are then used in projecting the real-world locations of people in two-dimensional space to generate a map of usage and movement within site boundaries. This system differs from existing processes by using permanent fixed cameras and lightweight processing algorithms to capture data throughout the entirety of a day regardless of time, weather, or schedule. This methodology avoids the time cost, limited assessment window, and possible observational and individual biases (whether intentional or unconscious) inherent in sending designers on site to record site usage through traditional manual methods, while mitigating the issues with occlusion, lighting, and perspective distortion typical of single-camera automated systems. This demonstration builds on work previously published in the CELA 2020 proceedings, with improvements made in processing speed and bandwidth, testing of exterior conditions, and more accurate geolocation and tracking.
    Additionally, output from the re-ID and mapping systems includes confidence scores and frame-by-frame tracking for data-validity testing and documentation. Results from the system in progress will be presented, and associated opportunities and constraints will be discussed.
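    The step of projecting detections to real-world locations in two-dimensional space is commonly done with a planar homography fitted from a handful of surveyed image-to-ground correspondences. The text does not describe this system's actual calibration, so the sketch below is a generic direct-linear-transform (DLT) version with invented coordinates.

    ```python
    import numpy as np

    def fit_homography(img_pts, world_pts):
        """Solve for the 3x3 homography H mapping image points to ground-plane
        coordinates from >= 4 point correspondences (DLT, via SVD)."""
        A = []
        for (x, y), (X, Y) in zip(img_pts, world_pts):
            # Each correspondence contributes two linear constraints on H.
            A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
            A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)      # null-space vector = homography entries
        return H / H[2, 2]            # normalise so the bottom-right entry is 1

    def to_ground(H, pt):
        """Map an image point (e.g. a detection's foot position) to site coordinates."""
        v = H @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]           # perspective divide

    # Invented calibration: four image corners mapped to a 2 m x 2 m ground patch.
    img = [(0, 0), (1, 0), (1, 1), (0, 1)]
    world = [(0, 0), (2, 0), (2, 2), (0, 2)]
    H = fit_homography(img, world)
    ```

    With H fitted once per fixed camera, every per-frame detection can be mapped to the shared site plan, which is what makes the cross-camera usage map possible.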