

Title: HYPER: A Hybrid High-Performance Framework for Network Function Virtualization
Network function virtualization (NFV) offers the potential for both enhancing service delivery flexibility and reducing overall costs by virtualizing network functions that are traditionally implemented in dedicated hardware. However, the flexibility of NFV comes with considerable compromises, since functions running in virtual machines can introduce significant performance overhead. In this paper, we present a novel high-performance framework called HYPER, which combines programmable hardware infrastructure and traditional software infrastructure in NFV to achieve both high performance and flexibility for supporting virtualized network functions (VNFs). In HYPER, we design a mediator layer to hide underlying infrastructure heterogeneity from the NFV orchestrator and simplify VNF management. In addition, we design an SLA-aware service chaining algorithm in HYPER to leverage the benefits of the hybrid infrastructure to fulfill both functional and performance requirements from service subscribers (or tenants). To optimize resource utilization efficiency, we also introduce a performance-aware VNF placement algorithm in HYPER, which accommodates both resource and performance requirements in placing VNFs. We implement HYPER in a testbed based on OpenStack and ONetCard. Experimental results show that HYPER reduces the forwarding latency of a service chain by 40% to 67% compared with a Data Plane Development Kit (DPDK)-based implementation, while maintaining the flexibility of VNF management.
Award ID(s):
1642143
NSF-PAR ID:
10047715
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE Journal on Selected Areas in Communications
Volume:
35
Issue:
11
ISSN:
0733-8716
Page Range / eLocation ID:
2490 - 2500
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
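The HYPER abstract above mentions a performance-aware VNF placement algorithm that weighs both resource and performance requirements across hardware and software infrastructure. The record does not include code, so the following is only a minimal Python sketch of that general idea; the Node/VNF fields, the latency budgets, and the greedy rule are illustrative assumptions, not HYPER's actual algorithm.

# Hypothetical sketch of performance-aware VNF placement in the spirit of HYPER:
# each VNF is mapped to either a hardware (programmable NIC) node or a software
# (server) node, preferring nodes that satisfy both the resource demand and the
# per-VNF latency budget. All names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str          # "hardware" or "software"
    capacity: float    # abstract resource units remaining
    latency_us: float  # typical per-VNF processing latency on this node

@dataclass
class VNF:
    name: str
    demand: float             # abstract resource units required
    latency_budget_us: float  # per-VNF latency requirement

def place(vnfs, nodes):
    """Greedily place each VNF on the feasible node with the lowest latency."""
    placement = {}
    for vnf in vnfs:
        feasible = [n for n in nodes
                    if n.capacity >= vnf.demand and n.latency_us <= vnf.latency_budget_us]
        if not feasible:
            raise RuntimeError(f"no feasible node for {vnf.name}")
        best = min(feasible, key=lambda n: n.latency_us)
        best.capacity -= vnf.demand
        placement[vnf.name] = best.name
    return placement

if __name__ == "__main__":
    nodes = [Node("smartnic-0", "hardware", 4.0, 5.0),
             Node("server-0", "software", 16.0, 40.0)]
    chain = [VNF("firewall", 2.0, 10.0), VNF("nat", 1.0, 100.0)]
    print(place(chain, nodes))

A real placement engine would also have to respect chaining order and cross-node link latency, which this sketch deliberately ignores.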
More Like this
  1. In this paper, we consider the challenges that arise from the need to scale virtualized network functions (VNFs) at 100 Gbps line speed and beyond. Traditional VNF designs are monolithic in state management and scheduling: they internally maintain all states and the operations associated with them. Without proper design considerations, such designs suffer from limitations when scaling to 100 Gbps link speed and beyond: inefficient cache utilization because of contention caused by frequent control-plane activities, computational and memory-intensive tasks taking up CPU time, and shared state forcing synchronization among cores. We address these limitations by arguing for the need to granularly decompose a VNF into data/control components that are co-located within a server but can be independently scaled among the cores. To realize this approach, we design a "serverless" programming framework with novel abstractions to optimize the data components that must process packets at line speed, reduce contention on data state, and enable run-time scheduling of different components for improved resource utilization. The abstractions, combined with the runtime system that we design, help NFV developers focus on the logic and correctness of VNF programming without worrying about how VNFs may be scaled in or out. We evaluate our platform by comparing it with monolithic approaches under different workloads and by analyzing the advantages of this separation for scalability, performance determinism, and feature velocity.
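Abstract 1 argues for decomposing a VNF into independently scalable data and control components. As a rough illustration of that separation (not the paper's framework; the class names and the snapshot-publishing scheme are assumptions made for this example), a data component can process packets against an immutable snapshot of state while a control component applies updates off the fast path.

# Illustrative data/control decomposition: the control thread builds new rule
# snapshots and publishes them; the data component only reads the current
# snapshot, so packet processing never blocks on control-plane updates.
import queue
import threading
import time

class ControlComponent(threading.Thread):
    """Applies rule updates off the fast path and publishes read-only snapshots."""
    def __init__(self, updates, publish):
        super().__init__(daemon=True)
        self.updates = updates   # queue of (prefix, action) updates
        self.publish = publish   # callback that installs a new snapshot
        self.rules = {}

    def run(self):
        while True:
            prefix, action = self.updates.get()
            self.rules[prefix] = action
            self.publish(dict(self.rules))  # hand the data path a fresh copy

class DataComponent:
    """Fast path: looks up packets against the current snapshot only."""
    def __init__(self):
        self.snapshot = {}

    def install(self, snapshot):
        self.snapshot = snapshot  # single reference swap; no locking on reads

    def process(self, dst_prefix):
        return self.snapshot.get(dst_prefix, "forward")

if __name__ == "__main__":
    data = DataComponent()
    updates = queue.Queue()
    ControlComponent(updates, data.install).start()
    updates.put(("10.0.0.0/8", "drop"))
    time.sleep(0.1)  # let the control component apply the update
    print(data.process("10.0.0.0/8"))  # -> "drop"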
  2. Resource flexing is the notion of allocating resources on demand as workload changes. This is a key advantage of Virtualized Network Functions (VNFs) over their non-virtualized counterparts. However, it is difficult to balance timeliness and resource efficiency when making resource flexing decisions due to unpredictable workloads and complex VNF processing logic. In this work, we propose an Elastic resource flexing system for Network functions VIrtualization (ENVI) that leverages a combination of VNF-level and infrastructure-level features to construct a neural-network-based scaling decision engine for generating timely scaling decisions. To adapt to dynamic workloads, we design a window-based rewinding mechanism to update the neural network with emerging workload patterns and make accurate decisions in real time. Our experimental results for real VNFs (the IDS Suricata and the caching proxy Squid), using workloads generated from real-world traces, show that ENVI provisions significantly fewer resources (up to 26% fewer) without violating service-level objectives, compared with commonly used rule-based scaling policies.
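Abstract 2's ENVI combines VNF-level and infrastructure-level features with a window-based rewinding mechanism to keep its scaling model current. The snippet below is only a toy stand-in to make that loop concrete: a sliding window of recent labelled samples refreshes a linear scorer that substitutes for ENVI's neural network, and all feature names, thresholds, and the update rule are invented for this example.

# Toy window-based scaling decision loop (illustrative only, not ENVI's code).
from collections import deque
import random

WINDOW = 30

def featurize(sample):
    # hypothetical features: CPU utilization, packet rate, queue length
    return [sample["cpu"], sample["pps"] / 1e6, sample["qlen"] / 1000]

class ScalingEngine:
    def __init__(self):
        self.window = deque(maxlen=WINDOW)  # sliding window of recent samples
        self.weights = [1.0, 1.0, 1.0]      # stand-in for a neural network
        self.threshold = 2.0

    def observe(self, sample, overloaded=None):
        self.window.append((featurize(sample), overloaded))
        if overloaded is not None:
            self._rewind()                  # refresh the model on recent labelled data

    def _rewind(self):
        # crude online update: nudge weights toward features seen during overload
        for feats, label in self.window:
            if label is None:
                continue
            direction = 1.0 if label else -1.0
            self.weights = [w + 0.01 * direction * f for w, f in zip(self.weights, feats)]

    def decide(self, sample):
        score = sum(w * f for w, f in zip(self.weights, featurize(sample)))
        return "scale_out" if score > self.threshold else "hold"

if __name__ == "__main__":
    engine = ScalingEngine()
    for _ in range(100):
        s = {"cpu": random.random(), "pps": random.uniform(0, 2e6), "qlen": random.randint(0, 500)}
        engine.observe(s, overloaded=s["cpu"] > 0.8)
    print(engine.decide({"cpu": 0.95, "pps": 1.8e6, "qlen": 450}))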
  3. Network function virtualization (NFV) technology attracts tremendous interest from the telecommunication industry and data center operators, as it allows service providers to assign resources for Virtual Network Functions (VNFs) on demand, achieving better flexibility, programmability, and scalability. To improve server utilization, one popular practice is to deploy best-effort (BE) workloads alongside high-priority (HP) VNFs when the HP VNFs' resource usage is detected to be low. The key challenge of this deployment scheme is to dynamically balance the service-level objective (SLO) and the total cost of ownership (TCO) to optimize data center efficiency under inherently fluctuating workloads. With the recent advancement in deep reinforcement learning, we conjecture that it has the potential to solve this challenge by adaptively adjusting resource allocation to reach improved performance and higher server utilization. In this paper, we present a closed-loop automation system, RLDRM, to dynamically adjust Last Level Cache allocation between HP VNFs and BE workloads using deep reinforcement learning. The results demonstrate improved server utilization while maintaining the required SLO for the HP VNFs.
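Abstract 3's RLDRM closes the loop between cache allocation and measured performance using deep reinforcement learning. The toy sketch below keeps only the shape of that loop: a tabular, bandit-style learner (standing in for the deep RL agent) picks how many last-level-cache ways to give the high-priority VNF, against a made-up environment model and reward. None of the numbers, interfaces, or the reward shape come from the paper.

# Toy closed-loop cache allocation: more HP ways lower HP latency but cost BE
# throughput; the agent is rewarded for BE throughput only while the HP SLO holds.
import random

TOTAL_WAYS = 11  # LLC ways to split between HP VNF and BE workloads

def simulate(hp_ways):
    """Fake environment model: latency falls and BE throughput rises/falls with the split."""
    hp_latency = 100.0 / max(hp_ways, 1) + random.uniform(0, 2)
    be_throughput = (TOTAL_WAYS - hp_ways) * 1.0
    return hp_latency, be_throughput

def reward(hp_latency, be_throughput, slo=20.0):
    return be_throughput if hp_latency <= slo else -10.0  # penalize SLO violations

q = {a: 0.0 for a in range(1, TOTAL_WAYS)}  # action = ways given to the HP VNF
alpha, epsilon = 0.1, 0.2

for step in range(2000):
    action = random.choice(list(q)) if random.random() < epsilon else max(q, key=q.get)
    r = reward(*simulate(action))
    q[action] += alpha * (r - q[action])    # bandit-style value update (no state here)

print("learned HP ways:", max(q, key=q.get))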
  4. Network Function Virtualization (NFV) migrates network functions from proprietary hardware to commercial servers on the edge or cloud, making network services more cost-efficient, easier to manage, and more flexible. To realize these advantages, it is critical to find an optimal deployment of the chained virtual network functions, i.e., service function chains (SFCs), in a hybrid edge-and-cloud environment, considering both resource and latency; this is an NP-hard problem. In this paper, we first restrict the problem to the edge and design a constant-approximation algorithm named chained next fit (CNF), in which a sub-algorithm called double spanning tree (DST) handles virtual network embedding. We then take both cloud and edge resources into consideration and create an improved algorithm called decreasing sorted, chained next fit (DCNF), which also has a provable constant approximation ratio. The simulation results demonstrate that the ratio between DCNF and the optimal solution is much smaller than the theoretical bound, averaging about 1.25. Moreover, DCNF consistently outperforms the benchmarks, which implies that it is a good candidate for joint resource and latency optimization in hybrid edge-and-cloud networks.
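Abstract 4's DCNF combines a decreasing sort of SFCs with a next-fit placement of their chained VNFs. The sketch below shows only that bin-packing skeleton under assumed capacities; it omits the double spanning tree embedding and the latency terms that the actual CNF/DCNF algorithms handle, so it is an illustration rather than the paper's method.

# Illustrative "decreasing sorted, chained next fit" skeleton: sort chains by
# total demand, place each chain's VNFs in order onto the current edge server,
# move to the next server when full, and spill to the cloud when the edge is exhausted.
EDGE_CAPACITY = 10.0
NUM_EDGE_SERVERS = 3

def dcnf(chains):
    """chains: list of lists of per-VNF demands, one inner list per SFC."""
    chains = sorted(chains, key=sum, reverse=True)       # "decreasing sorted"
    placement, server, used = [], 0, 0.0
    for chain in chains:
        for demand in chain:
            if server < NUM_EDGE_SERVERS and used + demand > EDGE_CAPACITY:
                server, used = server + 1, 0.0           # "next fit": move on, never look back
            if server >= NUM_EDGE_SERVERS:
                placement.append(("cloud", demand))      # spill to cloud when edge is full
            else:
                used += demand
                placement.append((f"edge-{server}", demand))
    return placement

if __name__ == "__main__":
    sfcs = [[3.0, 2.0, 4.0], [1.0, 1.0], [5.0, 5.0, 2.0]]
    for host, demand in dcnf(sfcs):
        print(host, demand)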
  5. Network Function Virtualization (NFV) emerges as a promising paradigm with the potential for cost efficiency, manageability, and flexibility, in which the service function chain (SFC) deployment scheme is a crucial technology. In this paper, we propose an Ant Colony Optimization (ACO) meta-heuristic algorithm for online SFC deployment, called ACO-OSD, with the objective of jointly minimizing server operation cost and network latency. As a meta-heuristic, ACO-OSD outperforms state-of-the-art heuristic algorithms, achieving 42.88% lower total cost on average. To reduce the time cost of ACO-OSD, we design two acceleration mechanisms: a Next-Fit (NF) strategy and a many-to-one mapping between SFC deployment schemes and ant tours. In addition, for scenarios requiring real-time decisions, we propose a novel online learning framework based on the ACO-OSD algorithm, called prior-based learning real-time placement (PLRP). It achieves near real-time SFC deployment with time complexity O(n), where n is the total number of VNFs of all newly arrived SFCs, while maintaining a performance advantage of 36.53% lower average total cost than state-of-the-art heuristic algorithms. Finally, we perform extensive simulations to demonstrate the outstanding performance of ACO-OSD and PLRP compared with the benchmarks.
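Abstract 5 describes an ant colony optimization meta-heuristic for SFC deployment. The compact skeleton below is only an illustration of the general ACO pattern, not ACO-OSD itself: each "ant" assigns the VNFs of an SFC to servers probabilistically, guided by pheromone values, and the best tour found so far reinforces its pheromones. The cost model (a server-opening cost plus a per-hop latency term) and all parameters are assumptions for the sketch.

# Minimal ACO loop for assigning a chain of VNFs to servers.
import random

SERVERS, VNFS = 4, 5
ANTS, ITERS, EVAP = 10, 50, 0.1
pheromone = [[1.0] * SERVERS for _ in range(VNFS)]

def cost(assignment):
    open_cost = 10.0 * len(set(assignment))                    # servers switched on
    latency = sum(1.0 for a, b in zip(assignment, assignment[1:]) if a != b)
    return open_cost + latency                                  # each inter-server hop adds latency

def build_tour():
    """One ant builds an assignment, sampling servers in proportion to pheromone."""
    return [random.choices(range(SERVERS), weights=pheromone[v])[0] for v in range(VNFS)]

best, best_cost = None, float("inf")
for _ in range(ITERS):
    for tour in (build_tour() for _ in range(ANTS)):
        c = cost(tour)
        if c < best_cost:
            best, best_cost = tour, c
    for v in range(VNFS):                                       # evaporate, then reinforce the best tour
        for s in range(SERVERS):
            pheromone[v][s] *= (1 - EVAP)
        pheromone[v][best[v]] += 1.0 / best_cost

print("best assignment:", best, "cost:", best_cost)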