%A Mittal, Viyom
%A Qi, Shixiong
%A Bhattacharya, Ratnadeep
%A Lyu, Xiaosu
%A Li, Junfeng
%A Kulkarni, Sameer
%A Li, Dan
%A Hwang, Jinho
%A Ramakrishnan, K.
%A Wood, Timothy
%D 2021
%M OSTI ID: 10313278
%T Mu: An Efficient, Fair and Responsive Serverless Framework for Resource-Constrained Edge Clouds
%U https://doi.org/10.1145/3472883.3487014
%X Serverless computing platforms simplify development, deployment, and automated management of modular software functions. However, existing serverless platforms typically assume an over-provisioned cloud, making them a poor fit for Edge Computing environments where resources are scarce. In this paper we propose a redesigned serverless platform that comprehensively tackles the key challenges for serverless functions in a resource-constrained Edge Cloud. Our Mu platform cleanly integrates the core resource management components of a serverless platform: autoscaling, load balancing, and placement. Each worker node in Mu transparently propagates metrics such as service rate and queue length in response headers, feeding this information to the load balancing system so that it can better route requests, and to our autoscaler to anticipate workload fluctuations and proactively meet SLOs. Data from the autoscaler is then used by the placement engine to account for heterogeneity and fairness across competing functions, ensuring overall resource efficiency and minimizing resource fragmentation. We implement our design as a set of extensions to the Knative serverless platform and demonstrate its improvements in terms of resource efficiency, fairness, and response time. Our evaluation shows that Mu improves fairness by more than 2x over the default Kubernetes placement engine, improves 99th percentile response times by 62% through better load balancing, and reduces SLO violations and resource consumption through proactive and precise autoscaling. Mu reduces the average number of pods required by more than 15% for a set of real Azure workloads.
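The abstract describes workers piggybacking load metrics (service rate, queue length) on response headers so the load balancer and autoscaler can react to them. The sketch below is a minimal illustration of that idea, not Mu's actual implementation: the header names (X-Mu-Queue-Length, X-Mu-Service-Rate) and the least-loaded routing rule are assumptions made for this example, since the abstract does not specify them.

```python
import http.server
import time

# Hypothetical header names; the paper's abstract does not specify the exact
# keys Mu uses to propagate per-worker metrics.
QUEUE_LEN_HEADER = "X-Mu-Queue-Length"
SERVICE_RATE_HEADER = "X-Mu-Service-Rate"


class MetricReportingHandler(http.server.BaseHTTPRequestHandler):
    """Toy worker that piggybacks load metrics on every response."""

    # Shared in-memory stand-ins for real per-worker state.
    queue_length = 0
    completed = 0
    start_time = time.time()

    def do_GET(self):
        # Serve the request and update worker-side metrics.
        type(self).completed += 1
        elapsed = max(time.time() - type(self).start_time, 1e-6)
        service_rate = type(self).completed / elapsed  # requests per second

        body = b"ok\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        # Propagate current load metrics back to the load balancer / autoscaler.
        self.send_header(QUEUE_LEN_HEADER, str(type(self).queue_length))
        self.send_header(SERVICE_RATE_HEADER, f"{service_rate:.2f}")
        self.end_headers()
        self.wfile.write(body)


def pick_backend(metrics_by_backend):
    """Least-loaded routing: prefer the backend reporting the shortest queue."""
    return min(metrics_by_backend, key=lambda b: metrics_by_backend[b]["queue_length"])


if __name__ == "__main__":
    # Example load-balancer decision using metrics scraped from response headers.
    metrics = {
        "worker-a": {"queue_length": 4, "service_rate": 120.0},
        "worker-b": {"queue_length": 1, "service_rate": 95.0},
    }
    print("route next request to:", pick_backend(metrics))
```

In this toy version the routing policy only considers queue length; the paper additionally feeds these metrics to the autoscaler and placement engine, which this sketch does not attempt to model.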