
Search for: All records

Creators/Authors contains: "Liang, J."


  1. CONTEXT For the last 40 years, the aggregate number of women receiving bachelor’s degrees in engineering in the US has remained stuck at approximately 20%. Research into this “disappointing state of affairs” has established that “the [educational] institutions in which women sought inclusion are themselves gendered, raced and classed” (Borrego, 2011; Riley et al., 2015; Tonso, 2007). PURPOSE Our focus is women students who thrive in undergraduate engineering student project teams. We need to learn more about how these women describe becoming engineers, how they come to think of themselves as engineers, how they perform their engineering selves, and how others come to identify them as engineers (Tonso, 2006). METHODS We are guided by a feminist, activist, and interpretive lens. Our multi-case study method, i.e., three semi-structured interviews and photovoice, offers two advantages: 1) the knowledge generated by case studies is concrete and context-dependent (Case and Light, 2011); 2) case studies are useful in the heuristic identification of new variables and potential hypotheses (George and Bennett, 2005). ACTUAL OUTCOMES Our preliminary results suggest these women find joy in their experience of developing and applying engineering expertise to real, tangible, and challenging problems. They find knowing-about and knowing-how exciting, self-rewarding, and self-defining. Further, these women work to transform the culture, or ways of participating, in project teams. This transformation not only facilitates knowing-about and knowing-how but also creates an environment in which women can claim their expertise and their identity as engineers, and have that expertise and those identities affirmed by others. CONCLUSIONS If we aim to transform our gendered, raced, classed institutions, we need to learn more about women who thrive within those institutions. We need to learn more about the joy of doing engineering that these women experience.
We also need to learn more about how they create an “integration-and-learning perspective” for themselves (Ely and Thomas, 2001) and a “climate for inclusion” within those project teams (Nishii, 2012), a perspective and climate that fosters the joy of doing engineering.
    Free, publicly-accessible full text available December 7, 2022
  2. As autonomous robots interact with and navigate around real-world environments such as homes, it is useful to reliably identify and manipulate articulated objects, such as doors and cabinets. Many prior works in object articulation identification require manipulation of the object, either by the robot or a human. While recent works have addressed predicting articulation types from visual observations alone, they often assume prior knowledge of category-level kinematic motion models or a sequence of observations in which the articulated parts move according to their kinematic constraints. In this work, we propose FormNet, a neural network that identifies the articulation mechanisms between pairs of object parts from a single frame of an RGB-D image and segmentation masks. The network is trained on 100k synthetic images of 149 articulated objects from 6 categories. Synthetic images are rendered via a photorealistic simulator with domain randomization. Our proposed model predicts motion residual flows of object parts, and these flows are used to determine the articulation type and parameters. The network achieves an articulation type classification accuracy of 82.5% on novel object instances in trained categories. Experiments also show how this method enables generalization to novel categories and can be applied to real-world images without fine-tuning.
    Free, publicly-accessible full text available October 1, 2022
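The idea behind entry 2 — that per-point motion flows of an object part determine its articulation type — can be illustrated with a toy geometric classifier. This is a hedged sketch of my own, not FormNet's learned model: the function name, the spread heuristic, and the tolerance are illustrative assumptions. A prismatic joint translates every point by the same vector, while a revolute joint produces flows that vary with distance from the rotation axis:

```python
import numpy as np

def classify_articulation(points, flows, tol=1e-3):
    """Toy articulation classifier from per-point flow vectors.

    Prismatic joints move every point by (nearly) the same vector, so
    the spread of the flow vectors around their mean is ~0. Revolute
    joints produce flows that vary across the part, giving a large
    spread. This is a hand-crafted stand-in for a learned prediction.
    """
    mean_flow = flows.mean(axis=0)
    # Average deviation of each point's flow from the mean flow.
    spread = np.linalg.norm(flows - mean_flow, axis=1).mean()
    if spread < tol:
        # Translation direction doubles as the (unnormalized) joint axis.
        return "prismatic", mean_flow
    # A real system would additionally fit a rotation axis here.
    return "revolute", None

# Four points on a square part.
pts = np.array([[1., 0., 0.], [0., 1., 0.], [-1., 0., 0.], [0., -1., 0.]])

# Prismatic: identical flow everywhere.
label, axis = classify_articulation(pts, np.tile([0.01, 0., 0.], (4, 1)))

# Revolute: flows from a small rotation about the z-axis, v = omega x p.
omega = np.array([0., 0., 0.1])
label2, _ = classify_articulation(pts, np.cross(omega, pts))
```

In this sketch `label` comes out `"prismatic"` and `label2` comes out `"revolute"`; the actual paper replaces the hand-crafted decision rule with a network that also recovers the articulation parameters.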
  3. Free, publicly-accessible full text available May 1, 2023
  4. Free, publicly-accessible full text available February 1, 2023
  5. Liu, W.; Wang, Y.; Guo, B.; Tang, X.; Zeng, S. (Eds.)
    15O(α,γ)19Ne is regarded as one of the most important thermonuclear reactions in type I X-ray bursts. To study the properties of the key resonance in this reaction using β decay, the existing Proton Detector component of the Gaseous Detector with Germanium Tagging (GADGET) assembly is being upgraded to operate as a time projection chamber (TPC) at FRIB. This upgrade includes the associated hardware as well as software; this paper mainly focuses on the software upgrade. The full detector setup is simulated using the ATTPCROOTv2 data analysis framework for 20Mg and 241Am.
    Free, publicly-accessible full text available January 1, 2023
  6. Liu, W.; Wang, Y.; Guo, B.; Tang, X.; Zeng, S. (Eds.)
    Sensitivity studies have shown that the 15O(α,γ)19Ne reaction is the most important reaction-rate uncertainty affecting the shape of light curves from type I X-ray bursts. This reaction is dominated by the 4.03 MeV resonance in 19Ne. Previous measurements by our group have shown that this state is populated in the decay sequence of 20Mg. A single 20Mg(βpα)15O event through the key 15O(α,γ)19Ne resonance yields a characteristic signature: the emission of a proton and an alpha particle. To achieve the granularity necessary for the identification of this signature, we have upgraded the Proton Detector of the Gaseous Detector with Germanium Tagging (GADGET) into a time projection chamber to form the GADGET II detection system. GADGET II has been fully constructed and is entering the testing phase.
    Free, publicly-accessible full text available January 1, 2023
  7. Free, publicly-accessible full text available February 1, 2023
  8. Manipulation tasks can often be decomposed into multiple subtasks performed in parallel, e.g., sliding an object to a goal pose while maintaining con- tact with a table. Individual subtasks can be achieved by task-axis controllers defined relative to the objects being manipulated, and a set of object-centric controllers can be combined in an hierarchy. In prior works, such combinations are defined manually or learned from demonstrations. By contrast, we propose using reinforcement learning to dynamically compose hierarchical object-centric controllers for manipulation tasks. Experiments in both simulation and real world show how the proposed approach leads to improved sample efficiency, zero-shotmore »generalization to novel test environments, and simulation-to-reality transfer with- out fine-tuning.« less