
Search for: All records

Creators/Authors contains: "Kong, S"

Note: When clicking on a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Free, publicly-accessible full text available January 1, 2023
  2. Monocular depth predictors are typically trained on large-scale training sets which are naturally biased w.r.t. the distribution of camera poses. As a result, trained predictors fail to make reliable depth predictions for testing examples captured under uncommon camera poses. To address this issue, we propose two novel techniques that exploit the camera pose during training and prediction. First, we introduce a simple perspective-aware data augmentation that synthesizes new training examples with more diverse views by perturbing the existing ones in a geometrically consistent manner. Second, we propose a conditional model that exploits the per-image camera pose as prior knowledge by encoding it as a part of the input. We show that jointly applying the two methods improves depth prediction on images captured under uncommon and even never-before-seen camera poses. We show that our methods improve performance when applied to a range of different predictor architectures. Lastly, we show that explicitly encoding the camera pose distribution improves the generalization performance of a synthetically trained depth predictor when evaluated on real images.
  3. We present a method for establishing confidence in the decisions of an autonomous car which accounts for errors not only in control but also in perception. The key idea is that the controller generates a certificate, which is a kind of proof that its interpretation of the scene is accurate and its proposed action is safe. Checking the certificate is faster and simpler than generating it, which allows for a monitor that comprises a much smaller trusted base than the system as a whole. Simulation experiments suggest that the approach is practical.
  4. Computer science (CS) has the potential to positively impact the economic well-being of those who pursue it, and the lives of those who benefit from its innovations. Yet, large CS learning opportunity gaps exist for students from historically underrepresented populations. The Computer Science for All (CS for All) movement has brought nationwide attention to these inequities in CS education. More recently, financial support for research-practice partnerships (RPPs) has increased to address these disparities because such collaborations can yield more relevant research for immediate educational/practical application. However, for partnerships to effectively engage in equity-focused initiatives toward making computing inclusive, partnership members need to begin with a shared definition of equity to which all are accountable. This poster takes a critical look at the collaborative development of a definition of equity and its application in a CS for All RPP of university researchers and administrators from local education agencies across the state of California. Details are shared about how the RPP collectively defined equity and how that definition evolved and informed the larger project’s work with school administrators/educators.
  5. Certified control is a new architectural pattern for achieving high assurance of safety in autonomous cars. As with a traditional safety controller or interlock, a separate component oversees safety and intervenes to prevent safety violations. This component (along with sensors and actuators) comprises a trusted base that can ensure safety even if the main controller fails. But in certified control, the interlock does not use the sensors directly to determine when to intervene. Instead, the main controller is given the responsibility of presenting the interlock with a certificate that provides evidence that the proposed next action is safe. The interlock checks this certificate, and intervenes only if the check fails. Because generating such a certificate is usually much harder than checking one, the interlock can be smaller and simpler than the main controller, and thus assuring its correctness is more feasible.
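
The monocular-depth abstract above describes a conditional model that encodes the per-image camera pose as part of the network input. A minimal sketch of one plausible encoding, assuming the pose is reduced to pitch and roll angles appended to the image as constant-valued channels (the function name and the two-angle channel scheme are illustrative assumptions, not the paper's actual interface):

```python
import numpy as np

def encode_pose_channels(image, pitch, roll):
    """Append constant-valued channels encoding camera pose to an image.

    image: H x W x 3 float array; pitch, roll: angles in radians.
    Returns an H x W x 5 array that a pose-conditioned depth network
    could consume in place of the raw RGB input.
    """
    h, w, _ = image.shape
    pitch_ch = np.full((h, w, 1), pitch, dtype=image.dtype)
    roll_ch = np.full((h, w, 1), roll, dtype=image.dtype)
    return np.concatenate([image, pitch_ch, roll_ch], axis=2)
```

Broadcasting the pose over spatial dimensions keeps the conditioning compatible with fully convolutional predictors, since every pixel sees the same prior.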
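
The certified-control abstracts above rest on the asymmetry that checking a certificate is much simpler than generating one. A minimal, hypothetical sketch of that pattern, assuming the certificate claims an obstacle-free distance ahead and cites the lidar returns supporting it (`Certificate`, `check_certificate`, and the range representation are all illustrative, not the authors' actual design):

```python
from dataclasses import dataclass, field

@dataclass
class Certificate:
    # Hypothetical certificate from the main controller: the path ahead
    # is claimed clear for `clear_distance` meters, with the raw lidar
    # ranges cited as evidence.
    clear_distance: float
    cited_ranges: list = field(default_factory=list)

def check_certificate(cert, braking_distance):
    """Interlock-side check, far simpler than generating the certificate.

    Accept only if the claimed clear distance covers the braking distance
    and every cited sensor reading is consistent with that claim; any
    failure triggers intervention by the interlock.
    """
    if cert.clear_distance < braking_distance:
        return False
    return all(r >= cert.clear_distance for r in cert.cited_ranges)
```

For example, `check_certificate(Certificate(10.0, [12.0, 11.5]), 8.0)` accepts, while a cited return closer than the claimed clear distance is rejected. The checker's trusted base is just this handful of comparisons, illustrating why assuring the interlock is more feasible than assuring the main controller.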