We present DeepPicar, a low-cost autonomous car platform based on a deep neural network. DeepPicar is a small-scale replication of NVIDIA's real self-driving car, DAVE-2. DAVE-2 uses a deep convolutional neural network (CNN) that takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture (9 layers, 27 million connections, and 250K parameters) and can drive itself in real time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep-learning-based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting CNN-based real-time control, from 20 Hz up to 100 Hz depending on the platform. However, shared resource contention remains an important issue that must be considered when deploying CNN models on shared-memory-based embedded computing platforms; we observe up to an 11.6X increase in the execution time of the CNN-based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.
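As a rough, hedged illustration of the kind of control-loop timing measurement this abstract describes, the sketch below times a single inference of a DAVE-2-style CNN against an assumed 20 Hz (50 ms) control deadline. The PyTorch model definition and input resolution are approximations of the published DAVE-2 architecture, not DeepPicar's actual code.

```python
# Hypothetical sketch: timing a DAVE-2-style CNN control loop (not DeepPicar's code).
import time
import torch
import torch.nn as nn

class PilotNetLike(nn.Module):
    """Rough DAVE-2-style net: 5 conv + 4 fully connected layers (~250K params)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),  # 1x18 feature maps at this point
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                        # output: steering angle
        )
    def forward(self, x):
        return self.head(self.features(x))

model = PilotNetLike().eval()
frame = torch.randn(1, 3, 66, 200)   # DAVE-2's 66x200 input resolution
deadline_s = 0.050                   # assumed 20 Hz control period

with torch.no_grad():
    for _ in range(10):              # warm-up iterations before timing
        model(frame)
    start = time.perf_counter()
    model(frame)
    elapsed = time.perf_counter() - start

print(f"inference: {elapsed * 1e3:.1f} ms "
      f"({'meets' if elapsed < deadline_s else 'misses'} the 20 Hz deadline)")
```

Run on a Pi 3-class board, the printed latency would indicate whether a given control rate in the 20-100 Hz range cited above is attainable.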
Encoding Consistency: Optimizing Self-Driving Reliability With Real-Time Speed Data
Self-driving cars can revolutionize transportation systems, offering the potential to significantly enhance efficiency while also addressing the critical issue of human fatalities on roadways. Hence, there is a need to investigate methods that enhance self-driving technologies through end-to-end learning techniques. In this paper, we investigate methodologies that integrate Convolutional Neural Networks (CNNs) to improve self-driving consistency through real-time velocity and steering estimation. We extend a state-of-the-art end-to-end learning solution with real-time speed data as an additional model input to improve reliability. Specifically, our work integrates an optical encoder sensor system to record car speed during training data collection, ensuring the throttle can be regulated during model inference. An end-to-end experimental testbed is deployed on the Chameleon cloud using CHI@Edge infrastructure to manage a 1:18-scale car equipped with a Raspberry Pi as its onboard computer. Finally, we provide guidance that facilitates reproducibility and highlight the challenges and limitations of supporting such experiments.
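The abstract does not spell out how the speed input is fused into the network, so the following is a minimal, hypothetical sketch of one common design: concatenating the encoder's speed reading with the CNN's image features before the fully connected layers. The layer sizes, input resolution, and `SpeedAwarePilot` name are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch (assumed architecture): fusing an encoder-measured speed
# scalar with CNN image features to predict steering and throttle.
import torch
import torch.nn as nn

class SpeedAwarePilot(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # 66x200 input -> 48 x 5 x 22 feature maps after the three conv layers
        self.fc = nn.Sequential(
            nn.Linear(48 * 5 * 22 + 1, 100), nn.ReLU(),  # +1 for the speed scalar
            nn.Linear(100, 2),                           # outputs: steering, throttle
        )

    def forward(self, image, speed):
        feats = self.cnn(image)
        fused = torch.cat([feats, speed], dim=1)  # late fusion of the speed reading
        return self.fc(fused)

model = SpeedAwarePilot()
image = torch.randn(1, 3, 66, 200)    # camera frame
speed = torch.tensor([[0.8]])         # normalized encoder speed (assumed units)
steering, throttle = model(image, speed).unbind(dim=1)
```

Fusing speed after the convolutional stack, rather than at the image input, is one simple way to let the throttle output depend on the current velocity without disturbing the visual feature extractor.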
- Award ID(s): 2027170
- PAR ID: 10560398
- Publisher / Repository: Proceedings of the 4th Workshop on Flexible Resource and Application Management on the Edge
- Date Published:
- Format(s): Medium: X
- Location: Pisa, Italy
- Sponsoring Org: National Science Foundation
More Like this
- Accurate 3D object detection in real-world environments requires a large amount of high-quality annotated data. Acquiring such data is tedious and expensive, and often needs repeated effort when a new sensor is adopted or when the detector is deployed in a new environment. We investigate a new scenario for constructing 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector. For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area. This setting is label-efficient, sensor-agnostic, and communication-efficient: nearby units only need to share their predictions with the ego agent (e.g., the car). Naively using the received predictions as ground truths to train the ego car's detector, however, leads to inferior performance. We systematically study the problem and identify viewpoint mismatches and mislocalization (due to synchronization and GPS errors) as the main causes, which unavoidably result in false positives, false negatives, and inaccurate pseudo labels. We propose a distance-based curriculum, first learning from closer units with similar viewpoints and subsequently improving the quality of other units' predictions via self-training (see the sketch below). We further demonstrate that an effective pseudo-label refinement module can be trained with a handful of annotated data, largely reducing the data quantity necessary to train an object detector. We validate our approach on a recently released real-world collaborative driving dataset, using reference cars' predictions as pseudo labels for the ego car. Extensive experiments covering several scenarios (e.g., different sensors, detectors, and domains) demonstrate the effectiveness of our approach toward label-efficient learning of 3D perception from other units' predictions.
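A minimal, hypothetical sketch of the distance-based curriculum idea (not the authors' implementation; the `PseudoLabels` structure and distance thresholds are assumptions for illustration):

```python
# Hypothetical sketch of a distance-based curriculum for pseudo-label training.
# Closer units (smaller viewpoint mismatch) are used first; farther units are
# phased in after self-training improves label quality. Not the authors' code.
from dataclasses import dataclass

@dataclass
class PseudoLabels:
    distance_m: float   # distance from the broadcasting unit to the ego car
    boxes: list         # predicted 3D boxes shared by that nearby unit

def curriculum_rounds(shared: list, thresholds=(20.0, 50.0, 100.0)):
    """Yield training sets of pseudo labels, from nearby units outward."""
    for max_dist in thresholds:
        batch = [p for p in shared if p.distance_m <= max_dist]
        yield max_dist, batch

shared = [PseudoLabels(12.0, ["box_a"]), PseudoLabels(45.0, ["box_b"]),
          PseudoLabels(80.0, ["box_c"])]
for max_dist, batch in curriculum_rounds(shared):
    # train_detector(batch) would go here, followed by self-training to
    # refine the remaining units' labels before the next round.
    print(f"round <= {max_dist} m: {len(batch)} units' predictions")
```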
- Objective: We measured how long distraction by a smartphone affects simulated driving behaviors after the distracting tasks are completed (i.e., the distraction hangover). Background: Most drivers know that smartphones distract. Trying to limit distraction, drivers can use hands-free devices, where they only briefly glance at the smartphone. However, the cognitive cost of switching tasks from driving to communicating and back to driving adds an underappreciated, potentially long period to the total distraction time. Method: Ninety-seven 21- to 78-year-old individuals who self-identified as active drivers and smartphone users engaged in a simulated driving scenario that included smartphone distractions. Peripheral-cue and car-following tasks were used to assess driving behavior, along with synchronized eye tracking. Results: The participants' lateral speed was larger than baseline for 15 s after the end of a voice distraction and for up to 25 s after a text distraction. Correct identification of peripheral cues dropped about 5% per decade of age, and participants in the 71+ age group missed about 50% of peripheral cues within 4 s of the distraction. During distraction, coherence with the lead car in the car-following task dropped from 0.54 to 0.045, and seven participants rear-ended the lead car. Breadth of scanning contracted by 50% after distraction. Conclusion: Simulated driving performance drops dramatically after smartphone distraction for all ages and for both voice and texting. Application: Public education should include the dangers of any smartphone use during driving, including hands-free.
- Self-driving cars relying solely on ego-centric perception face limitations in sensing, often failing to detect occluded, faraway objects. Collaborative autonomous driving (CAV) is a promising direction, but collecting data for its development is non-trivial: it requires placing multiple sensor-equipped agents in a real-world driving scene simultaneously. As such, existing datasets are limited in locations and agents. We introduce a novel surrogate: generating realistic perception from different viewpoints in a driving scene, conditioned on a real-world sample, the ego car's sensory data. This surrogate has huge potential, as it could turn any ego-car dataset into a collaborative driving one, scaling up the development of CAV. We present the very first solution, using a combination of simulated collaborative data and real ego-car data. Our method, Transfer Your Perspective (TYP), learns a conditioned diffusion model whose output samples are not only realistic but also consistent in both semantics and layout with the given ego-car data (a rough sketch of such a conditioned objective follows below). Empirical results demonstrate TYP's effectiveness in a CAV setting. In particular, TYP enables us to (pre-)train collaborative perception algorithms, such as early and late fusion, with little or no real-world collaborative data, greatly facilitating downstream CAV applications.
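As a rough sketch of the conditioned-diffusion objective mentioned above, the snippet below shows a generic DDPM-style training step in which a denoiser is conditioned on ego-car features. The toy linear denoiser, feature dimensions, and noise schedule are placeholders, not TYP's actual architecture.

```python
# Generic DDPM-style conditional denoising step (illustrative placeholder only).
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # standard linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                            # stand-in for a conditional U-Net
    nn.Linear(64 + 32, 128), nn.ReLU(), nn.Linear(128, 64),
)

def training_step(target_view, ego_condition):
    """One denoising-loss step: predict the noise added to another agent's view,
    conditioned on features of the ego car's sensory data."""
    t = torch.randint(0, T, (target_view.shape[0],))
    noise = torch.randn_like(target_view)
    a = alphas_bar[t].unsqueeze(-1)
    noisy = a.sqrt() * target_view + (1 - a).sqrt() * noise
    pred = denoiser(torch.cat([noisy, ego_condition], dim=-1))
    return nn.functional.mse_loss(pred, noise)

loss = training_step(torch.randn(8, 64), torch.randn(8, 32))
loss.backward()
```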
- When machine learning (ML) algorithms are used in mission-critical domains (e.g., self-driving cars, cyber security) or life-critical domains (e.g., surgical robotics), it is often important to ensure that the learned models satisfy high-level correctness requirements. These requirements can be instantiated in particular domains via constraints like safety (e.g., a robot arm should not come within five meters of any human operator during any phase of an autonomous operation) or liveness (e.g., a car should eventually cross a 4-way intersection). Such constraints can be formally described in propositional logic, first-order logic, or temporal logics such as Probabilistic Computation Tree Logic (PCTL) [31]. For example, in a lane-change controller we can enforce the following PCTL safety property upon seeing a slow-moving truck ahead: Pr>0.99[F(changedLane or reducedSpeed)], where F is the "eventually" operator in PCTL. This property states that the car should eventually change lanes or reduce speed with high probability (greater than 0.99); a Monte Carlo sketch of checking such a property appears below. Trusted Machine Learning (TML) refers to a learning methodology that ensures that the specified properties are satisfied.
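As a hedged illustration of checking the quoted property statistically, the sketch below estimates Pr[F(changedLane or reducedSpeed)] by Monte Carlo sampling over bounded traces. The `simulate_trace` stub and its event probabilities are hypothetical stand-ins for a real controller simulator or a probabilistic model checker such as PRISM.

```python
# Hypothetical sketch: empirically estimating Pr>0.99[F(changedLane or reducedSpeed)]
# by sampling bounded traces from a (placeholder) lane-change controller simulator.
import random

def simulate_trace(horizon=100):
    """Placeholder simulator: returns a list of event sets, one per time step."""
    trace = []
    for _ in range(horizon):
        events = set()
        if random.random() < 0.08:
            events.add("changedLane")
        if random.random() < 0.05:
            events.add("reducedSpeed")
        trace.append(events)
    return trace

def eventually(trace, atoms):
    """F(atom1 or atom2 ...): does any step satisfy at least one of the atoms?"""
    return any(events & atoms for events in trace)

n, hits = 10_000, 0
for _ in range(n):
    if eventually(simulate_trace(), {"changedLane", "reducedSpeed"}):
        hits += 1

estimate = hits / n
print(f"estimated Pr[F(changedLane or reducedSpeed)] = {estimate:.4f}; "
      f"property {'holds' if estimate > 0.99 else 'violated'} (statistically)")
```

Note that bounding the horizon makes this an approximation of the unbounded "eventually" operator; a symbolic PCTL checker would verify it exactly on a finite model.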