

Title: Evaluating LTE Coverage and Quality from an Unmanned Aircraft System
Despite widespread LTE adoption and dependence, rural areas lag behind in coverage availability and quality. In the United States, while the Federal Communications Commission (FCC), which regulates mobile broadband, reports increases in LTE availability, the most recent FCC Broadband Report was criticized for overstating coverage. Physical assessments of cellular coverage and quality are essential for evaluating actual user experience, but measurement campaigns can be resource-, time-, and labor-intensive; more scalable measurement strategies are urgently needed. In this work, we first present several measurement solutions for capturing LTE signal strength measurements and compare their accuracy. Our findings reveal that simple, lightweight spectrum sensing devices have accuracy comparable to expensive solutions and can estimate quality within one gradation of accuracy relative to user equipment. We then show that these devices can be mounted on Unmanned Aircraft Systems (UAS) to measure coverage across wider geographic regions more rapidly and easily. Our results show that the low-cost aerial measurement techniques have 72% accuracy relative to the ground readings of user equipment and fall within one quality gradation 98% of the time.
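As a rough illustration of the gradation-based comparison described above, the following Python sketch pairs aerial sensor readings with ground user-equipment readings and computes both exact agreement and agreement within one gradation. The RSRP thresholds and sample values are illustrative assumptions only; they are not taken from the paper.

```python
# Minimal sketch (not the authors' code): comparing aerial spectrum-sensor RSRP
# readings against ground UE readings by quality gradation. The gradation
# thresholds below are illustrative assumptions, not values from the paper.

RSRP_GRADES = [(-80, "excellent"), (-90, "good"), (-100, "fair"), (float("-inf"), "poor")]

def grade(rsrp_dbm: float) -> int:
    """Map an RSRP reading (dBm) to a quality gradation index (0 = best)."""
    for i, (threshold, _label) in enumerate(RSRP_GRADES):
        if rsrp_dbm >= threshold:
            return i
    return len(RSRP_GRADES) - 1

def gradation_agreement(aerial: list[float], ground: list[float]) -> tuple[float, float]:
    """Return (exact-match rate, within-one-gradation rate) for paired readings."""
    n = min(len(aerial), len(ground))
    exact = sum(grade(a) == grade(g) for a, g in zip(aerial, ground))
    within_one = sum(abs(grade(a) - grade(g)) <= 1 for a, g in zip(aerial, ground))
    return exact / n, within_one / n

# Example with made-up paired measurements (dBm):
aerial = [-78.0, -92.5, -101.0, -88.0]
ground = [-81.0, -90.0, -97.5, -87.0]
print(gradation_agreement(aerial, ground))
```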
Award ID(s):
1831698
NSF-PAR ID:
10192169
Author(s) / Creator(s):
Date Published:
Journal Name:
Mobile Ad hoc and Smart Systems (IEEE MASS)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: With the proliferation of Dynamic Spectrum Access (DSA), Internet of Things (IoT), and Mobile Edge Computing (MEC) technologies, various methods have been proposed to deduce key network and user information in cellular systems, such as available cell bandwidth as well as user locations and mobility. Such information, largely held by cellular networks, is not only of vital significance to other systems that share spectrum and/or geography with them; applications within cellular systems can also benefit considerably from inferring it, as exemplified by the efforts of video streaming services to predict cell bandwidth. Hence, we are motivated to develop a new tool that uses off-the-shelf products to uncover as much information as possible that was previously closed to outsiders and user devices. Given the widespread deployment of LTE and its continuous evolution toward 5G, we design and implement U-CIMAN, a client-side system to accurately UnCover as much Information in Mobile Access Networks as allowed by LTE encryption. Among the many potential applications of U-CIMAN, we highlight one use case: accurately measuring the spectrum tenancy of a commercial LTE cell. Besides measuring spectrum tenancy in units of resource blocks, U-CIMAN discovers user mobility and traffic types associated with spectrum usage through decoded control messages and user data bytes. We conducted a detailed four-month spectrum measurement on a commercial LTE cell; among other observations, we find that the Modulation and Coding Scheme has predictive power for spectrum tenancy and that channel off-times are bounded under 10 seconds.
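The spectrum-tenancy metric mentioned above can be illustrated with a short sketch. The following Python code is a hypothetical simplification, not U-CIMAN's actual interface: it assumes per-subframe resource-block allocations have already been decoded from control messages, and it computes a tenancy fraction plus the longest channel off-time.

```python
# Illustrative sketch only: computing spectrum tenancy in resource blocks from
# per-subframe downlink allocations, in the spirit of the measurement described
# above. The record format is a hypothetical simplification, not U-CIMAN's API.

from dataclasses import dataclass

@dataclass
class SubframeAllocation:
    subframe_index: int
    allocated_rbs: int  # resource blocks granted in this subframe (from decoded control info)

def spectrum_tenancy(allocs: list[SubframeAllocation], total_rbs: int = 100) -> float:
    """Fraction of resource blocks in use over the observed subframes.

    total_rbs=100 corresponds to a 20 MHz LTE carrier."""
    if not allocs:
        return 0.0
    used = sum(a.allocated_rbs for a in allocs)
    return used / (total_rbs * len(allocs))

def max_channel_off_time_ms(allocs: list[SubframeAllocation]) -> int:
    """Longest run of consecutive idle subframes (1 ms each)."""
    longest = run = 0
    for a in sorted(allocs, key=lambda a: a.subframe_index):
        run = run + 1 if a.allocated_rbs == 0 else 0
        longest = max(longest, run)
    return longest
```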
  2. Abstract:
     Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors who share the same surname and first initial of their first name. We illustrate our approach using three case studies.
     Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net, i.e., a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point; e.g., if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude they are one and the same individual. We find email addresses similarly adequate; e.g., if two author names that share the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an affiliation may have employed two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same surname and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we developed a procedure for use with the bibliometric software VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this, the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name within this field (referred to by the script as the primary author) whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial that share commonalities in the WOS field on which the user chose to consolidate author names. This typically results in a significant reduction of the initial dataset. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to). An illustrative sketch of this consolidation step follows this abstract.
     Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.
     Practical implications: The procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.
     Originality/value: Once again, the procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting approaches with more recent ones, harnessing the benefits of both.
     Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each of our case studies. We find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, the timeframes analyzed, and the subject areas in which the authors publish differ across the case studies. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, as well as a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
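To make the field-commonality idea concrete, here is a minimal Python sketch (not the authors' VantagePoint script): it merges author-name variants that share a surname and first initial whenever they also share a value in a chosen match field such as an email address or ORCID iD. The record structure and field names are illustrative assumptions.

```python
# Minimal sketch of the field-commonality consolidation idea described above.
# Not the authors' VantagePoint script; record structure is hypothetical.

from collections import defaultdict

def consolidate(records: list[dict], match_field: str) -> dict[str, set[str]]:
    """Group name variants (e.g. 'Doe, J' and 'Doe, John') that share any value
    in match_field. Returns match-field value -> set of linked name variants."""
    groups: dict[str, set[str]] = defaultdict(set)
    for rec in records:
        for value in rec.get(match_field, []):
            groups[value].add(rec["author_name"])
    # Keep only values that actually link two or more name variants.
    return {v: names for v, names in groups.items() if len(names) > 1}

records = [
    {"author_name": "Doe, John", "email": ["jdoe@example.edu"], "orcid": ["0000-0001-0000-0000"]},
    {"author_name": "Doe, J",    "email": ["jdoe@example.edu"], "orcid": []},
    {"author_name": "Doe, Jane", "email": ["jane.doe@other.org"], "orcid": []},
]
print(consolidate(records, "email"))  # links 'Doe, John' and 'Doe, J'
```

In practice this step would be repeated over several match fields (identifiers, emails, source titles, co-authors, ISSNs), with manual inspection of whatever remains.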
  3. Millimeter-wave (mmWave) communications have been regarded as one of the most promising solutions to deliver ultra-high data rates in wireless local-area networks. A significant barrier to delivering consistently high rate performance is the rapid variation in quality of mmWave links due to blockages and small changes in user locations. If link quality can be predicted in advance, proactive resource allocation techniques such as link-quality-aware scheduling can be used to mitigate this problem. In this paper, we propose a link quality prediction scheme based on knowledge of the environment. We use geometric analysis to identify the shadowed regions that separate line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios, and build LoS and NLoS link-quality predictors based on an analytical model and a regression-based approach, respectively. For the more challenging NLoS case, we use a synthetic dataset generator with accurate ray tracing analysis to train a deep neural network (DNN) to learn the mapping between environment features and link quality. We then use the DNN to efficiently construct a map of link quality predictions within given environments. Extensive evaluations with additional synthetically generated scenarios show a very high prediction accuracy for our solution. We also experimentally verify the scheme by applying it to predict link quality in an actual 802.11ad environment, and the results show close agreement between predicted values and measurements of link quality.
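The NLoS regression step described above can be sketched roughly as follows. The feature set, network size, and synthetic data are assumptions for illustration only; they are not the paper's actual DNN or ray-tracing dataset.

```python
# Hedged sketch: train a small neural network to map environment/geometry
# features to a link-quality value (e.g. SNR in dB). Features and data below
# are synthetic stand-ins, not the paper's ray-traced dataset.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical features per location:
# [distance_m, blockage_depth_m, reflector_distance_m, angle_offset_deg]
X = rng.uniform([1, 0, 1, 0], [50, 5, 30, 90], size=(2000, 4))
# Synthetic target standing in for ray-traced link quality (dB).
y = 40 - 0.5 * X[:, 0] - 4.0 * X[:, 1] + 5.0 / X[:, 2] - 0.05 * X[:, 3]
y += rng.normal(0, 1.0, size=len(y))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X[:1500], y[:1500])
print("held-out R^2:", model.score(X[1500:], y[1500:]))
```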
  4. Augmented Reality (AR) has been widely hailed as a representative of the ultra-high bandwidth, ultra-low latency apps that 5G networks will enable. While single-user AR can perform AR tasks locally on the mobile device, multi-user AR apps, which allow multiple users to interact within the same physical space, critically rely on the cellular network to support user interactions. However, a recent study showed that multi-user AR apps can experience very high end-to-end (E2E) latency when running over LTE, rendering user interaction practically infeasible. In this paper, we study whether 5G mmWave, which promises significant bandwidth and latency improvements over LTE, can support multi-user AR by conducting an in-depth measurement study of the same popular multi-user AR app over both LTE and 5G mmWave. Our measurement and analysis show that: (1) The E2E AR latency over LTE is significantly lower than the values reported in the previous study; however, it still remains too high for practical user interaction. (2) 5G mmWave brings no benefit to multi-user AR apps. (3) While 5G mmWave reduces the latency of the uplink visual data transmission, other components of the AR app are independent of the network technology and account for a significant fraction of the E2E latency. (4) The app drains 66% more network energy, which translates to 28% higher total energy, over 5G mmWave compared to over LTE.
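A back-of-the-envelope check of finding (4), under the simplifying assumption that non-network energy is unchanged between LTE and 5G mmWave, shows what network share of total device energy the two reported percentages imply:

```python
# Back-of-the-envelope check of the energy figures above, assuming non-network
# energy is the same over LTE and 5G mmWave. If network energy rises by 66%
# while total energy rises by 28%, then with n = network and o = other energy:
#   1.66*n + o = 1.28*(n + o)  =>  0.38*n = 0.28*o

network_increase = 0.66
total_increase = 0.28

n_over_o = total_increase / (network_increase - total_increase)
network_share_lte = n_over_o / (1 + n_over_o)
print(f"implied network share of total energy over LTE: {network_share_lte:.0%}")  # ~42%
```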
  5.
    The Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE) is a coordinated, multiexpedition International Ocean Discovery Program (IODP) drilling project designed to investigate fault mechanics and seismogenesis along subduction megathrusts through direct sampling, in situ measurements, and long-term monitoring in conjunction with allied laboratory and numerical modeling studies. The fundamental scientific objectives of the NanTroSEIZE drilling project include characterizing the nature of fault slip and strain accumulation, fault and wall rock composition, fault architecture, and state variables throughout the active plate boundary system. IODP Expedition 365 is part of NanTroSEIZE Stage 3, with the following primary objectives: (1) retrieval of a temporary observatory at Site C0010 that has been monitoring temperature and pore pressure within the major splay thrust fault (termed the “megasplay”) at 400 meters below seafloor since November 2010 and (2) deployment of a complex long-term borehole monitoring system (LTBMS) that will be connected to the Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET) seafloor cabled observatory network postexpedition (anticipated June 2016). The LTBMS incorporates multilevel pore pressure sensing, a volumetric strainmeter, tiltmeter, geophone, broadband seismometer, accelerometer, and thermistor string. Together with an existing observatory at Integrated Ocean Drilling Program Site C0002 and a possible future installation near the trench, the Site C0010 observatory will allow monitoring within and above regions of contrasting behavior of the megasplay fault and the plate boundary as a whole. These include a site above the updip edge of the locked zone (Site C0002), a shallow site in the megasplay fault zone and its footwall (Site C0010), and a site at the tip of the accretionary prism (Integrated Ocean Drilling Program Site C0006). Together, this suite of observatories has the potential to capture deformation spanning a wide range of timescales (e.g., seismic and microseismic activity, slow slip, and interseismic strain accumulation) across a transect from near-trench to the seismogenic zone. Site C0010 is located 3.5 km along strike to the southwest of Integrated Ocean Drilling Program Site C0004. The site was drilled and cased during Integrated Ocean Drilling Program Expedition 319, with casing screens spanning a ~20 m interval that includes the megasplay fault, and suspended with a temporary instrument package (a “SmartPlug”). During Integrated Ocean Drilling Program Expedition 332 in late 2010, the instrument package was replaced with an upgraded sensor package (the “GeniusPlug”), which included pressure and temperature sensors and a set of geochemical and biological experiments. Expedition 365 achieved its primary scientific and operational objectives, including recovery of the GeniusPlug with a >5 y record of pressure and temperature conditions within the shallow megasplay fault zone, geochemical samples, and its in situ microbial colonization experiment; and installation of the LTBMS. The pressure records from the GeniusPlug include high-quality records of formation and seafloor responses to multiple fault slip events, including the 11 March 2011 Tohoku M9 and 1 April 2016 Mie-ken Nanto-oki M6 earthquakes. The geochemical sampling coils yielded in situ pore fluids from the splay fault zone, and microbes were successfully cultivated from the colonization unit. 
The complex sensor array, in combination with the multilevel hole completion, is one of the most ambitious and sophisticated observatory installations in scientific ocean drilling (similar to that in Hole C0002G, deployed in 2010). Overall, the installation went smoothly, efficiently, and ahead of schedule. The extra time afforded by the efficient observatory deployment was used for coring in Holes C0010B–C0010E. Despite challenging hole conditions, the depth interval corresponding to the screened casing across the megasplay fault was successfully sampled in Hole C0010C, and the footwall of the megasplay was sampled in Hole C0010E, with >50% recovery for both zones. In the hanging wall of the megasplay fault (Holes C0010C and C0010D), we recovered indurated silty clay with occasional ash layers and sedimentary breccias. Some of the deposits show burrows and zones of diagenetic alteration/colored patches. Mudstones show different degrees of deformation, spanning from occasional fractures to intervals of densely fractured scaly claystones up to >10 cm thick. Sparse faulting with low displacement (usually <2 cm) is seen in core and exhibits a primarily normal and, rarely, reverse sense of slip. When present, ash was entrained along fractures and faults. In one case, a ~10 cm thick ash layer was found that showed a fining-downward gradation into a mottled zone with clasts of the underlying silty claystones. In Hole C0010E, the footwall of the megasplay fault was recovered. Sediments are horizontally to gently dipping and mainly comprise olive-gray silt. The deposits of the underthrust sediment prism are less indurated than the hanging wall mudstones and show lamination on a centimeter scale. The material is less intensely deformed than the mudstones, and apart from occasional fracturing (some of it drilling disturbance), evidence of structural features is absent.