Title: Asking great questions
The questions we ask and how we ask them will make a difference in how successful we are in meetings, in collaborations and in our careers as statisticians and data scientists. What makes a question good and what makes a good question great? Great questions elicit information useful for accomplishing the tasks of a project and strengthen the statistician–domain expert relationship. Great questions have three parts: the question, the answer and the paraphrasing of the answer to create shared understanding. We discuss three strategies for asking great questions: preface questions with statements about the intent behind asking the question; follow the question with behaviours and actions consistent with the prefaced words including actions such as listening, paraphrasing and summarizing; and model a collaborative relationship via the asking of a great question. We describe the methods and results of a study that shows how questions can be assessed, that statisticians can learn to ask great questions and that those who have learned this skill consider it to be valuable for their careers. We provide practical guidelines for learning how to ask great questions so that statisticians can improve their collaboration skills and thus increase their impact to help address societal challenges.
Award ID(s):
1955109
PAR ID:
10380496
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Stat
Volume:
11
Issue:
1
ISSN:
2049-1573
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The questions we ask and the way in which we ask them can make all the difference in how successful we are in meetings, in collaborations, and in our careers as statisticians and data scientists. What makes a question good and what makes a good question great? In this paper, we develop a theory for asking great questions that elicit information useful for accomplishing the tasks of a collaborative project and also strengthen the statistician-domain expert relationship. We deconstruct asking great questions into three parts: the question, the answer, and the paraphrasing of the answer to create shared understanding. We discuss three strategies for asking great questions: preface questions with statements about the intent behind asking the question; follow the question with behaviors and actions consistent with the prefaced words including actions such as listening, paraphrasing, and summarizing; and model a collaborative relationship via the asking of a great question. We provide practical guidelines for learning these skills so that statisticians can improve their statistical collaboration skills and thus increase their impact to help address societal challenges.
  2. To be successful, engineers must ask their clients, coworkers, and bosses questions. Asking questions can improve work quality and make the asker appear smarter. However, people often hesitate to ask questions for fear of seeming incompetent or inferior. This study investigates: what characteristics and experiences are connected to engineering students’ perceptions of asking questions? We analyzed data from a survey of over a thousand engineering undergraduates across a nationally representative sample of 27 U.S. engineering schools. We focused on three dependent variables: question-asking self-efficacy (how confident students are in their ability to ask a lot of questions), social outcome expectations around asking questions (whether students believe that if they ask a lot of questions, they will earn the respect of their colleagues), and career outcome expectations (whether they believe asking a lot of questions will hurt their chances of getting ahead at work). We were surprised to find that question-asking self-efficacy and outcome expectations did not vary significantly by gender, under-represented minority status, or school size. However, students with high question-asking self-efficacy and outcome expectations were more likely to have engaged in four extracurricular experiences: participating in an internship or co-op, conducting research with a faculty member, participating in a student group, and holding a leadership role in an organization or student group. The number of different types of these extracurricular activities a student engaged in correlated with question-asking self-efficacy and positive outcome expectations around asking questions. The results illustrate the relationship between extracurricular activities and students’ self-efficacy and behavior outcome expectations. The college experience is more than just formal academic classes. Students learn from experiences that occur after class or during the summer, and ideally these experiences complement class-derived skills and confidence in asking questions.
  3. Users often fail to formulate their complex information needs in a single query. As a consequence, they need to scan multiple result pages and/or reformulate their queries, which is a frustrating experience. Alternatively, systems can improve user satisfaction by proactively asking users questions to clarify their information needs. Asking clarifying questions is especially important in information-seeking conversational systems, since they can only return a limited number (often only one) of results. In this paper, we formulate the task of asking clarifying questions in open-domain information retrieval and propose an offline evaluation methodology for the task. We create a dataset, called Qulac, through crowdsourcing. Our dataset is based on the TREC Web Track 2009-2012 data and consists of over 10K question-answer pairs for 198 TREC topics with 762 facets. Our experiments on an oracle model demonstrate that asking only one good question leads to over 100% retrieval performance improvement, which clearly demonstrates the potential impact of the task. We further propose a neural model for selecting clarifying questions based on the original query and the previous question-answer interactions. Our model significantly outperforms competitive baselines. To foster research in this area, we have made Qulac publicly available.
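The oracle experiment described in this abstract can be pictured with a short sketch. Assuming hypothetical `retrieve` and `evaluate` helpers (stand-ins, not the paper's released code), the oracle simply tries every crowdsourced question-answer pair and keeps the question whose answer most improves the retrieval metric:

```python
# Hypothetical sketch of an oracle clarifying-question selector: expand the
# query with the answer elicited by each candidate question, re-run retrieval,
# and keep the question that maximizes the evaluation metric (e.g., MRR).

from typing import Callable, List, Tuple

def oracle_select_question(
    query: str,
    qa_pairs: List[Tuple[str, str]],           # (clarifying question, user answer)
    retrieve: Callable[[str], List[str]],      # query -> ranked document ids
    evaluate: Callable[[List[str]], float],    # ranked list -> metric score
) -> Tuple[str, float]:
    """Pick the clarifying question whose answer most improves retrieval."""
    # Baseline: retrieval quality with the original, unclarified query.
    best_question, best_score = "", evaluate(retrieve(query))
    for question, answer in qa_pairs:
        # Expand the original query with the answer this question would elicit.
        expanded = f"{query} {answer}"
        score = evaluate(retrieve(expanded))
        if score > best_score:
            best_question, best_score = question, score
    # Returns "" if no question improves over the unclarified baseline.
    return best_question, best_score
```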
  4. Users often need to look through multiple search result pages or reformulate queries when they have complex information-seeking needs. Conversational search systems make it possible to improve user satisfaction by asking questions to clarify users’ search intents. Answering a series of open-ended questions starting with “what/why/how”, however, can take significant user effort. To quickly identify user intent and reduce effort during interactions, we propose an intent clarification task based on yes/no questions, where the system needs to ask the correct question about intents within the fewest conversation turns. In this task, it is essential to use negative feedback about the previous questions in the conversation history. To this end, we propose a Maximum-Marginal-Relevance (MMR) based BERT model (MMR-BERT) that leverages negative feedback, following the MMR principle, to select the next clarifying question. Experiments on the Qulac dataset show that MMR-BERT significantly outperforms state-of-the-art baselines on the intent identification task, and the selected questions also achieve significantly better performance on the associated document retrieval tasks.
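A minimal sketch of the MMR principle behind such a selector, assuming generic embedding vectors and cosine similarity in place of the paper's learned BERT scorer (the function names here are illustrative only): a candidate question scores higher when it matches the query and lower when it resembles questions the user has already rejected, which is how the negative feedback enters.

```python
# Minimal MMR-style clarifying-question selection. Relevance to the query is
# traded off against redundancy with previously rejected (answered-"no")
# questions; embeddings and similarity are assumed stand-ins for a BERT scorer.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def mmr_next_question(
    query_vec: np.ndarray,
    candidate_vecs: dict,        # question text -> embedding vector
    rejected_vecs: list,         # embeddings of questions the user rejected
    lam: float = 0.7,            # trade-off: relevance vs. novelty
) -> str:
    """Return the candidate maximizing lam*relevance - (1-lam)*redundancy."""
    def mmr_score(vec: np.ndarray) -> float:
        relevance = cosine(query_vec, vec)
        # Penalize similarity to any question already answered "no".
        redundancy = max((cosine(vec, r) for r in rejected_vecs), default=0.0)
        return lam * relevance - (1 - lam) * redundancy
    return max(candidate_vecs, key=lambda q: mmr_score(candidate_vecs[q]))
```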
  5. The ability to identify one’s own confusion and to ask a question that resolves it is an essential metacognitive skill that supports self-regulation (Winne, 2005). Yet, while students receive substantial training in how to answer questions, little classroom time is spent training students how to ask good questions. Past research has shown that students are able to pose more high-quality questions after being instructed in a taxonomy for classifying the quality of their questions (Marbach‐Ad & Sokolove, 2000).

As pilot data collection in preparation for a larger study funded through NSF-DUE, we provided engineering statics students training in writing high-quality questions to address their own confusions. The training emphasized the value of question-asking in learning and how to categorize questions using a simple taxonomy based on prior work (Harper et al., 2003). The taxonomy specifies five question levels: 1) an unspecific question, 2) a definition question, 3) a question about how to do something, 4) a why question, and 5) a question that extends knowledge to a new circumstance. At the end of each class period during a semester-long statics course, students were prompted to write and categorize a question that they believed would help them clarify their current point of greatest confusion. Through regular practice writing and categorizing such questions, we hoped to improve students' abilities to ask questions that require higher-level thinking.

We collected data from 35 students in courses at two institutions. Over the course of the semester, students had the opportunity to write and categorize twenty of their own questions. After the semester, the faculty member categorized student questions using the taxonomy to assess the appropriateness of the taxonomy and whether students used it accurately. Analysis of the pilot data indicates three issues to be addressed: 1) student compliance in writing and categorizing their questions varied; 2) some students had difficulty correctly coding their questions using the taxonomy; and 3) some student questions could not be clearly characterized using the taxonomy, even for faculty raters.

We will address each of these issues with appropriate refinements in our next round of data collection. 1) Students may have been overwhelmed with the request to write a question after each class period, so in the future we will require students to write and categorize at least one question per week, with more frequent questions encouraged. 2) To improve student use of the taxonomy, students will receive more practice with it when it is introduced and more feedback on their categorization of questions during the semester. 3) We are reformulating our taxonomy to accommodate questions that may straddle more than one category, such as a question about how to extend a mathematical operation to a new situation (which could be categorized as either level 3 or 5). We are hopeful that these changes will improve accuracy and compliance, enabling us to use the intervention to promote metacognitive regulation and to measure the resulting changes, which is the intent of the larger project.
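The five-level taxonomy lends itself to a small data-structure sketch. The enum below paraphrases the levels from the abstract; the example question and its tag are hypothetical, since in the study students coded their own questions:

```python
# Illustrative encoding of the five-level question taxonomy described above.
# Level names paraphrase the abstract (based on Harper et al., 2003).

from enum import IntEnum

class QuestionLevel(IntEnum):
    UNSPECIFIC = 1   # an unspecific question
    DEFINITION = 2   # asks what a term means
    PROCEDURE = 3    # asks how to do something
    WHY = 4          # asks why something is true or works
    EXTENSION = 5    # extends knowledge to a new circumstance

# Hypothetical example: a student tags the week's question with a level.
weekly_question = ("How would the method of joints change for a 3D truss?",
                   QuestionLevel.EXTENSION)
```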