IoT systems require a wireless infrastructure that supports 5G devices, including handovers between heterogeneous and/or small-cell radio access networks. These networks are subject to increased radio link failures and loss of IoT network function. 3GPP New Radio (NR) features include multihoming, i.e., connecting a device to multiple networks simultaneously, and handover, i.e., changing the device's point of access to the network. This work leverages the Open Radio Access Network (O-RAN) Alliance specification, which defines an open architecture with intelligent controllers, to improve handover management. A new feedback-based time-to-trigger (TTT) handover mechanism is introduced, achieving improved throughput and fewer radio link failures than other techniques.
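To make the TTT idea concrete, here is a minimal sketch of the classic 3GPP A3-event handover check with a time-to-trigger window. The abstract does not detail the paper's feedback-based adaptation, so this is a generic illustration; all names (`hysteresis_db`, `ttt_steps`) are hypothetical.

```python
def should_hand_over(serving_rsrp, neighbor_rsrp, hysteresis_db=3.0,
                     ttt_steps=4):
    """Trigger a handover only if the neighbor cell beats the serving
    cell by the hysteresis margin for ttt_steps consecutive RSRP
    samples (an A3 event held for the time-to-trigger window).

    Illustrative only: the paper's mechanism adapts TTT via feedback,
    which is not reproduced here.
    """
    streak = 0
    for s, n in zip(serving_rsrp, neighbor_rsrp):
        if n > s + hysteresis_db:
            streak += 1
            if streak >= ttt_steps:
                return True
        else:
            streak = 0  # condition broken; restart the TTT window
    return False
```

A feedback-based variant would presumably shorten the window after radio link failures (too-late handovers) and lengthen it after ping-pong handovers (too-early ones).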
Demystifying Secondary Radio Access Failures in 5G
In this work, we conducted a measurement study with three US operators that reveals three previously unreported types of problematic failure handling on secondary radio access. Compared to primary radio access failures, secondary radio access failures do not hurt radio access availability but significantly impact data performance, particularly when 5G is used as secondary radio access to boost throughput. Improper failure handling results in significant throughput loss that is unnecessary in most instances. Datasets are available at https://github.com/mssn/scgfailure.
- Award ID(s): 1750953
- PAR ID: 10512987
- Publisher / Repository: ACM
- Date Published:
- Journal Name: HOTMOBILE '24: Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications
- ISBN: 9798400704970
- Page Range / eLocation ID: 114 to 120
- Format(s): Medium: X
- Location: San Diego CA USA
- Sponsoring Org: National Science Foundation
More Like this
We analyze how file systems and modern data-intensive applications react to fsync failures. First, we characterize how three Linux file systems (ext4, XFS, Btrfs) behave in the presence of failures. We find commonalities across file systems (pages are always marked clean, certain block writes always lead to unavailability) as well as differences (page content and failure reporting vary). Next, we study how five widely used applications (PostgreSQL, LMDB, LevelDB, SQLite, Redis) handle fsync failures. Our findings show that although applications use many failure-handling strategies, none are sufficient: fsync failures can cause catastrophic outcomes such as data loss and corruption. Our findings have strong implications for the design of file systems and applications that intend to provide strong durability guarantees.
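The failure mode described above is worth illustrating: because a failed fsync can leave pages marked clean, retrying fsync on the same descriptor and assuming success is unsafe. A minimal Python sketch of a durability-conscious write (the function name and recovery advice are illustrative, not the paper's):

```python
import os

def durable_write(path, data):
    """Write data and fsync, treating any fsync failure as fatal for
    this copy. After a failed fsync the page-cache state is unreliable
    (pages may be marked clean without reaching disk), so the only safe
    reaction is to surface the error, not to retry and assume success.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        try:
            os.fsync(fd)
        except OSError:
            # Do NOT loop on fsync here: a later call may succeed even
            # though the data was never written. Recover from a
            # known-good source (e.g. a write-ahead log) instead.
            raise
    finally:
        os.close(fd)
```

This mirrors the paper's point that application-level retry strategies built on fsync's apparent success can silently lose data.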
In IoT deployments, it is often necessary to replicate data in failure-prone and resource-constrained computing environments to meet the data availability requirements of smart applications. In this paper, we evaluate the impact of correlated failures on an off-the-shelf probabilistic replica placement strategy for IoT systems via trace-driven simulation. We extend this strategy to handle both correlated failures as well as resource scarcity by estimating the amount of storage capacity required to meet data availability requirements.These advancements lay the foundation for building computing systems that are capable of handling the unique challenge of reliable data access in low-resource environments.
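As a baseline for the capacity estimate mentioned above, the textbook replica-count calculation under *independent* node failures is a one-liner. This is only a sketch of the naive model: the paper's contribution is precisely that correlated failures break the independence assumption used here.

```python
import math

def replicas_for_availability(target, p_fail):
    """Smallest replica count k such that 1 - p_fail**k >= target,
    assuming independent node failures with per-node failure
    probability p_fail. Under correlated failures (the paper's focus),
    this estimate is optimistic and more capacity is needed.
    """
    return math.ceil(math.log(1.0 - target) / math.log(p_fail))
```

For example, with a 10% per-node failure probability, reaching 99.5% availability requires three replicas under this independent-failure model; storage capacity then scales with the replica count times the object size.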
Cognitive radio networks, a.k.a. dynamic spectrum access networks, offer a promising solution to the problems of spectrum scarcity and under-utilization. In this paper, we consider two single-user links: a primary and a secondary link. To increase secondary user (SU) transmission opportunities and primary user (PU) throughput, we consider a cognitive relay network in which the SU relays PU packets that are unsuccessfully received at the primary receiver (PR). At the PR side, two protocols are suggested: i) energy accumulation (EA) and ii) mutual-information accumulation (MIA). The average stable throughput of the secondary link is derived under these protocols for a specific throughput selected by the primary link. Results show that EA and MIA can significantly improve the secondary throughput compared with the no-accumulation scenario, especially under harsh channel conditions.
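The difference between the two accumulation protocols can be sketched with the standard information-theoretic decoding conditions. This is a generic textbook model, not the paper's derivation; SNRs and the target rate (in bits/s/Hz) are hypothetical inputs.

```python
import math

def decoded_mia(snrs, rate):
    """Mutual-information accumulation: each (re)transmission
    contributes log2(1 + SNR) of mutual information, and the packet
    decodes once the accumulated total reaches the target rate."""
    return sum(math.log2(1.0 + s) for s in snrs) >= rate

def decoded_ea(snrs, rate):
    """Energy accumulation: received SNRs are combined (as in
    maximal-ratio combining) before a single decoding attempt."""
    return math.log2(1.0 + sum(snrs)) >= rate
```

Since the product of (1 + SNR_i) terms is at least 1 plus their sum, MIA never decodes later than EA for the same receptions, which is consistent with MIA being the stronger of the two protocols.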
Failure analysis of microelectronics is essential to identify the root cause of a device's failure and prevent future failures. This process often requires removing material from the device sample to reach the region of interest, which can be done through various destructive methods such as mechanical polishing, chemical etching, focused ion beam milling, and laser machining. Among these, laser machining offers a unique combination of speed, precision, and controllability to achieve high-throughput, highly targeted material removal. In using lasers to process microelectronic samples, a much-desired capability is automated endpointing, which is crucial for minimizing manual checks and improving overall process throughput. In this paper, we propose integrating laser-induced breakdown spectroscopy (LIBS), a fast and high-precision means of material detection and process control, into an ultrashort-pulsed laser machining system to enable vertical endpointing for sample preparation and failure analysis of microelectronics. The capabilities of the proposed system are demonstrated through several sample-processing examples.
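The endpointing logic can be sketched as a per-pulse decision rule: stop milling once the LIBS spectrum shows the underlying layer's material dominating the current layer's. This is a hypothetical illustration of the idea, not the paper's algorithm; the element labels and threshold are invented for the example.

```python
def endpoint_reached(pulse_spectra, under_line, top_line,
                     ratio_threshold=1.0):
    """Hypothetical LIBS endpoint check. pulse_spectra is an iterable
    of dicts mapping emission-line labels to measured intensities for
    one laser pulse. Declare the endpoint when the underlying layer's
    marker line intensity reaches ratio_threshold times the top
    layer's, indicating the beam has broken through."""
    for spectrum in pulse_spectra:
        under = spectrum.get(under_line, 0.0)
        top = spectrum.get(top_line, 1e-9)  # guard against div-by-zero
        if under / top >= ratio_threshold:
            return True
    return False
```

In a real closed-loop system this check would run on each pulse's spectrum and gate the next pulse, which is what makes LIBS attractive for automated, vertical endpointing.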