Natural-language interaction between passengers and autonomous vehicles is essential for trust, safety, and user experience, but deploying Large Language Models (LLMs) on automotive edge platforms is constrained by compute, memory, energy, and privacy. We present Pi-talk, an edge-only system that enables real-time passenger–vehicle dialogue using a Small Language Model (SLM) running entirely on embedded hardware. Pi-talk performs multimodal fusion of onboard camera, ultrasonic distance, and navigation context via a lightweight encoder–adapter module that aligns modalities into compact semantic tokens for a pre-trained SLM. The SLM produces context-aware explanations of driving decisions, route options, and situational updates without cloud connectivity. Safety is enforced through a real-time safety envelope that gates responses and actions using distance thresholds and timing constraints. We further adapter-tune the SLM (on-device or offline) and deploy it with INT8 quantization and an Open Neural Network Exchange (ONNX) runtime to achieve efficient batch = 1 inference on Raspberry Pi–class hardware. We evaluate task quality (evaluation loss), end-to-end latency, CPU utilization, and memory footprint, and include ablations contrasting unimodal vs. fused inputs. Results show that Pi-talk sustains few-second, edge-only inference while meeting stringent resource and latency limits and maintaining the safety envelope required for autonomous operation. To our knowledge, Pi-talk is among the first edge-only, multimodal passenger–vehicle dialogue systems to both fine-tune and run a small language model entirely on Raspberry Pi–class, CPU-only hardware while enforcing an explicit runtime safety envelope.
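The safety envelope the abstract describes is, at heart, a gate on generation: a response or action is allowed only when fresh sensor readings satisfy distance and timing constraints. Below is a minimal Python sketch of that pattern; the thresholds, class names, and fallback messages are hypothetical illustrations, not values from the paper.

```python
# Minimal sketch of a runtime safety envelope in the style the abstract
# describes: responses and actions are gated by distance thresholds and
# timing constraints. All names and threshold values here are hypothetical.
import time
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    ultrasonic_distance_m: float   # nearest obstacle distance
    timestamp: float               # time the reading was taken

class SafetyEnvelope:
    def __init__(self, min_distance_m=0.5, max_sensor_age_s=0.2,
                 max_response_latency_s=3.0):
        self.min_distance_m = min_distance_m          # hypothetical threshold
        self.max_sensor_age_s = max_sensor_age_s      # stale-data cutoff
        self.max_response_latency_s = max_response_latency_s

    def permits_action(self, snapshot: SensorSnapshot) -> bool:
        """Gate any vehicle-affecting action on fresh, safe sensor data."""
        fresh = (time.time() - snapshot.timestamp) <= self.max_sensor_age_s
        clear = snapshot.ultrasonic_distance_m >= self.min_distance_m
        return fresh and clear

    def gate_response(self, generate_fn, snapshot: SensorSnapshot) -> str:
        """Run the SLM only if the envelope holds and the reply is timely."""
        if not self.permits_action(snapshot):
            return "Holding response: obstacle too close or sensor data stale."
        start = time.time()
        reply = generate_fn()  # e.g., quantized SLM inference via ONNX Runtime
        if time.time() - start > self.max_response_latency_s:
            return "Response timed out; issuing a short status update instead."
        return reply

if __name__ == "__main__":
    envelope = SafetyEnvelope()
    snap = SensorSnapshot(ultrasonic_distance_m=1.2, timestamp=time.time())
    print(envelope.gate_response(lambda: "Slowing for the crosswalk ahead.", snap))
```

Note that the gate wraps the SLM call rather than modifying the model, matching the abstract's separation between the quantized SLM and the runtime safety layer.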
This content will become publicly available on August 6, 2026
Design and Development of a Real-Time Camera-Based Smart Cooking Assistant
Personalized cooking recipe recommendation systems offer the potential to improve dietary choices for unhoused individuals and those transitioning out of homelessness. However, existing systems often neglect the needs of users with minimal cooking experience, providing little guidance during meal preparation. This study proposes the development of an intelligent cooking assistant system designed to offer real-time, step-by-step support throughout the cooking process. The system integrates a Raspberry Pi 5 mini-computer with a Raspberry Pi AI HAT+ (AI HAT+) and Raspberry Pi AI Camera (AI Camera), strategically mounted above the cooking area to continuously monitor culinary activity. At its core, the assistant uses a deep learning image classification model built on Ultralytics’ You Only Look Once version 11 (YOLO11) framework, trained on a curated dataset of 1,339 images collected during the preparation of chicken teriyaki and pasta dishes. The model achieved 100% precision and 99% recall in identifying the six cooking states used in this work, with an average confidence of 91% during real-time tests. The system is intended to enable greater culinary independence among individuals with little cooking experience, such as those affected by long-term homelessness.
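As a concrete illustration of the real-time monitoring loop described above, the sketch below runs an Ultralytics YOLO11 classification model on camera frames. The Ultralytics classification API shown is public, but the weights file, camera index, and printed fields are assumptions; the authors' trained model and six cooking-state labels are not published in the abstract.

```python
# Minimal sketch of a real-time cooking-state classification loop of the
# kind the abstract describes. The weights path and camera index are
# hypothetical; the authors' fine-tuned model is not available here.
import cv2
from ultralytics import YOLO

# Hypothetical YOLO11 classifier; the authors would load their fine-tuned
# cooking-state weights instead of the generic pretrained checkpoint.
model = YOLO("yolo11n-cls.pt")

cap = cv2.VideoCapture(0)  # camera mounted above the cooking area
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)   # batch-of-one inference
        probs = results[0].probs                # classification probabilities
        state = results[0].names[probs.top1]    # predicted class label
        conf = float(probs.top1conf)            # confidence in that label
        print(f"state={state} confidence={conf:.2f}")
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
```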
- Award ID(s):
- 2125654
- PAR ID:
- 10647109
- Publisher / Repository:
- IEEE
- Date Published:
- Page Range / eLocation ID:
- 361 to 366
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Occupancy detection systems are commonly equipped with high-quality cameras and a processor with high computational power to run detection algorithms. This paper presents a human occupancy detection system that uses battery-free cameras and a deep learning model implemented on a low-cost hub to detect human presence. Our low-resolution camera harvests energy from ambient light and transmits data to the hub using backscatter communication. We implement the state-of-the-art YOLOv5 detection network, which offers high detection accuracy and fast inference speed, on a Raspberry Pi 4 Model B. We achieve an inference speed of ∼100 ms per image and an overall detection accuracy of >90% using only 2 GB of RAM on the Raspberry Pi. In the experimental results, we also demonstrate that the detection is robust to noise, illuminance, occlusion, and angle of depression. (A minimal sketch of this detection step appears after this list.)
- Flood detection is difficult in rural areas with little or no monitoring infrastructure. Smaller streams and flood-prone regions often remain unmonitored, leaving communities vulnerable. Commercial systems are expensive and rely on proprietary designs, putting them out of reach for many communities. This work presents AquaCam, a low-cost, open-source flood detection system that uses a Raspberry Pi and a camera to measure stream water levels automatically. AquaCam captures images and trains a lightweight convolutional neural network (YOLOv8) on the collected data. The model learns to recognize water against natural backgrounds and to measure water height. To test whether AquaCam can adapt to new environments, we evaluated the trained model at a different site with no retraining. The system still identified water levels accurately, showing that the approach is practical and generalizable. AquaCam moves flood detection toward being affordable, accessible, and adaptable for the communities that need it.
- As cybersecurity and AI become increasingly important, introducing these subjects to younger learners is critical. However, limited attention spans pose challenges for primary and secondary school students when learning these complex topics. By employing tools such as drones and Raspberry Pis, students can actively engage in learning cybersecurity and AI concepts. This paper investigates the instructional benefits of Raspberry Pi and drone platforms in K-12 education. The integration of hands-on activities using Raspberry Pi and drones for cybersecurity and AI content was implemented and evaluated at GenCyber summer camps at Michigan Technological University. The findings highlight the GoPiGo drone and Raspberry Pi as effective instructional tools for teaching cybersecurity and AI to this age group. Additionally, hands-on tasks are essential for reinforcing understanding and maintaining student interest.
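For the occupancy-detection item above, the YOLOv5 inference step can be sketched with the project's documented torch.hub entry point. The confidence threshold, input image, and person-counting rule below are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch of a YOLOv5 person-detection step on CPU-only hardware
# such as a Raspberry Pi 4. The threshold, image, and occupancy rule are
# hypothetical; the paper's backscatter camera pipeline is not reproduced.
import torch

# Pretrained small YOLOv5 model loaded via the project's torch.hub API.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.5  # hypothetical confidence threshold

# Any image path or URL works; this filename is a placeholder.
results = model("room.jpg")

# Count detections labeled "person" to decide occupancy.
detections = results.pandas().xyxy[0]
occupied = (detections["name"] == "person").any()
print("occupied" if occupied else "vacant")
```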