<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Color-based Lightweight Utility-aware Load Shedding for Real-Time Video Analytics at the Edge</dc:title><dc:creator>Gupta, Harshit; Saurez, Enrique; Röger, Henriette; Bhowmik, Sukanya; Ramachandran, Umakishore; Rothermel, Kurt</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Real-time video analytics typically require video frames to be processed by a query to identify objects or activities of interest while adhering to an end-to-end frame processing latency constraint. This imposes a continuous and heavy load on backend compute and network infrastructure. Video data has inherent redundancy and does not always contain an object of interest for a given query. We leverage this property of video streams to propose a lightweight Load Shedder that can be deployed on edge servers or on inexpensive edge devices co-located with cameras. The proposed Load Shedder uses pixel-level color-based features to calculate a utility score for each ingress video frame and a minimum utility threshold to select interesting frames to send for query processing. Dropping unnecessary frames enables the video analytics query in the backend to meet the end-to-end latency constraint with fewer compute and network resources. To guarantee a bounded end-to-end latency at runtime, we introduce a control loop that monitors the backend load and dynamically adjusts the utility threshold. Performance evaluations show that the proposed Load Shedder selects a large portion of frames containing each object of interest while meeting the end-to-end frame processing latency constraint. Furthermore, it does not impose a significant latency overhead when running on edge devices with modest compute resources.</dc:description><dc:publisher>ACM</dc:publisher><dc:date>2024-06-24</dc:date><dc:nsf_par_id>10553456</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>123 to 134</dc:page_range_or_elocation><dc:issn/><dc:isbn>9798400704437</dc:isbn><dc:doi>https://doi.org/10.1145/3629104.3666037</dc:doi><dcq:identifierAwardId>2008368</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location>Villeurbanne, France</dc:location><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>