%A Zhang, Harry
%A Ichnowski, Jeffrey
%A Avigal, Yahav
%A Gonzales, Joseph
%A Stoica, Ion
%A Goldberg, Ken
%D 2020
%M OSTI ID: 10221252
%T Dex-Net AR: Distributed Deep Grasp Planning Using a Commodity Cellphone and Augmented Reality App
%X Consumer demand for augmented reality (AR) in mobile phone applications has driven the development of toolkits such as the Apple ARKit. Such applications have the potential to expand access to robot grasp planning systems such as Dex-Net. AR apps use structure-from-motion methods to compute a point cloud from a sequence of RGB images taken by the camera as it is moved around an object. However, the resulting point clouds are often noisy due to estimation errors. We present a distributed pipeline, Dex-Net AR, that allows point clouds to be uploaded to a server in our lab, cleaned, and evaluated by the Dex-Net grasp planner to generate a grasp axis that is returned and displayed as an overlay on the object. We implement Dex-Net AR using the iPhone and ARKit and compare results with those generated with high-performance depth sensors. The success rates with AR on harder adversarial objects are higher than with traditional depth images.
%R https://doi.org/10.1109/ICRA40945.2020.9197247