Title: Crowdsourced Misuse Detection in Dynamic Spectrum Sharing Wireless Networks
To address the scarcity of spectrum, the FCC has mandated the dynamic sharing of spectrum among different tiers of users. The success of spectrum sharing, however, relies on the automated enforcement of spectrum policies. We focus on ex post spectrum enforcement, i.e., enforcement during or after the occurrence of a potentially harmful event, but before or after actual harm has occurred. The major challenges we address are ensuring maximum channel coverage in a given region of enforcement, accurate and reliable detection of violations, and efficient selection of the entities that perform the detection. We adopt a crowdsourced methodology to monitor spectrum usage, and we ensure maximum coverage of the given area by dividing it into equal-sized regions and solving the enforcement problem with a divide-and-conquer mechanism over the entire region. We use a variant of the Multiple Choice Secretary algorithm to select volunteers. Finally, we simulate the enforcement framework and analyze the results.
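The abstract does not specify which Multiple Choice Secretary variant is used, so the following is only a minimal sketch of a threshold-based selection under assumed inputs: the volunteer pool, the `reputation` attribute, and the `score` function are illustrative, not taken from the paper.

```python
import math
import random

def secretary_select(volunteers, k, score):
    """Threshold-based Multiple Choice Secretary sketch.

    Observe the first n/e arrivals without selecting anyone, record the
    best score seen, then accept any later arrival who beats that
    threshold until k volunteers have been selected.
    """
    n = len(volunteers)
    observe = max(1, int(n / math.e))   # observation (no-select) phase
    threshold = max(score(v) for v in volunteers[:observe])
    selected = []
    for v in volunteers[observe:]:
        if score(v) > threshold:
            selected.append(v)
            if len(selected) == k:
                break
    # Fallback: if too few arrivals beat the threshold, fill the
    # remaining slots from the tail of the stream.
    i = n - 1
    while len(selected) < k and i >= observe:
        if volunteers[i] not in selected:
            selected.append(volunteers[i])
        i -= 1
    return selected

# Hypothetical usage: volunteers arrive in random order, scored by reputation.
random.seed(1)
pool = [{"id": i, "reputation": random.random()} for i in range(50)]
picked = secretary_select(pool, k=3, score=lambda v: v["reputation"])
print([v["id"] for v in picked])
```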
Award ID(s):
1642949
NSF-PAR ID:
10085497
Date Published:
Journal Name:
ICN: The International Conference on Networks
ISSN:
2308-4413
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The success of dynamic spectrum sharing in wireless networks depends on reliable, automated enforcement of spectrum access policies. In this paper, a crowdsourced approach is used to select volunteers to detect spectrum misuse. Volunteer selection is based on multiple criteria, including reputation, likelihood of being in a region, and ability to effectively detect channel misuse. We formulate the volunteer selection problem as a stable matching problem whereby volunteers' monitoring preferences are matched to channels' attributes. Given a set of volunteers, the objective is to ensure maximum coverage of the spectrum enforcement area and accurate detection of spectrum access violations on all channels in the area. The two matching algorithms, Volunteer Matching (VM) and Reverse Volunteer Matching (RVM), are based on variants of the Gale-Shapley algorithm for stable matching. We also propose two hybrid algorithms, HYBRID-VM and HYBRID-RVM, which augment the matching algorithms with a Secretary-based algorithm to overcome the shortcomings of the individual vanilla algorithms. Simulation results show that volunteer selection using HYBRID-VM yields better coverage of the region (better by 19.2% compared to the threshold-based Secretary algorithm), better detection accuracy, and better volunteer happiness than the other algorithms tested.
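As a rough illustration of the proposer-side (VM-style) matching, here is a minimal Gale-Shapley sketch. The toy preference lists are invented; the paper's actual algorithms build preferences from reputation, location likelihood, and detection ability, and the hybrid variants additionally layer a Secretary-based stage that is not shown here.

```python
def volunteer_matching(vol_prefs, chan_prefs):
    """Gale-Shapley-style sketch: volunteers propose to channels in order
    of preference; each channel provisionally holds its best proposer.

    vol_prefs:  {volunteer: [channels in preference order]}
    chan_prefs: {channel: [volunteers in preference order]}
    Assumes every channel ranks every volunteer.
    """
    # Lower index = more preferred, so channels can compare proposers.
    rank = {c: {v: i for i, v in enumerate(vs)} for c, vs in chan_prefs.items()}
    free = list(vol_prefs)                   # volunteers not yet matched
    next_choice = {v: 0 for v in vol_prefs}  # next channel each volunteer tries
    match = {}                               # channel -> volunteer
    while free:
        v = free.pop()
        if next_choice[v] >= len(vol_prefs[v]):
            continue                         # v has exhausted its list
        c = vol_prefs[v][next_choice[v]]
        next_choice[v] += 1
        if c not in match:
            match[c] = v                     # channel was unclaimed
        elif rank[c][v] < rank[c][match[c]]:
            free.append(match[c])            # channel trades up; old volunteer freed
            match[c] = v
        else:
            free.append(v)                   # rejected; v tries its next channel
    return match

# Hypothetical toy instance: 3 volunteers, 3 channels.
vol_prefs = {"v1": ["ch1", "ch2", "ch3"],
             "v2": ["ch1", "ch3", "ch2"],
             "v3": ["ch2", "ch1", "ch3"]}
chan_prefs = {"ch1": ["v2", "v1", "v3"],
              "ch2": ["v1", "v3", "v2"],
              "ch3": ["v3", "v2", "v1"]}
print(volunteer_matching(vol_prefs, chan_prefs))
```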
  2.
    Cooperative wireless networks, enabled by cognitive radios, allow mobile users to dynamically share access to spectrum. However, spectrum bands can be accessed illegitimately by malicious users, so the success of dynamic spectrum sharing relies on automated enforcement of spectrum policies. While prior work has focused on ex ante spectrum enforcement, this work explores new approaches to efficient ex post spectrum enforcement. The main objectives are to ensure maximum coverage of the area of enforcement and accurate detection of spectrum access violations. The first objective is achieved with the help of Lloyd's algorithm, which divides the enforcement area into a set of uniformly sized coverage regions. Detection accuracy is achieved by crowdsourcing spectrum access monitoring to volunteers, selected on the basis of their computational capabilities, location attributes, and reputation. A simulation framework was developed in CSIM19 (C++ version) to analyze the performance of the proposed system over the entire area of enforcement. The results show that the proposed scheme ensures efficient coverage of all channels and regions in the area of enforcement and a high average detection accuracy.
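For concreteness, below is a minimal sketch of Lloyd's algorithm as the classic assign/update iteration, assuming the enforcement area is represented by sample points; the paper's exact initialization, stopping rule, and distance metric are not given in the abstract.

```python
import random

def lloyd_partition(points, k, iters=50, seed=0):
    """Minimal Lloyd's-algorithm sketch: partition an enforcement area
    (given as 2D sample points) into k roughly uniform regions by
    alternating nearest-centroid assignment and centroid updates."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    regions = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the region of its nearest centroid.
        regions = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            regions[j].append(p)
        # Update step: move each centroid to the mean of its region.
        for i, region in enumerate(regions):
            if region:
                centroids[i] = (sum(p[0] for p in region) / len(region),
                                sum(p[1] for p in region) / len(region))
    return centroids, regions

# Hypothetical enforcement area sampled on a 20x20 grid, split into 4 regions.
area = [(x, y) for x in range(20) for y in range(20)]
centroids, regions = lloyd_partition(area, k=4)
print([len(r) for r in regions])   # region sizes should come out roughly equal
```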
  3. Context. 3C 84 is a nearby radio source with a complex total intensity structure, showing linear polarisation and spectral patterns. A detailed investigation of the central engine region necessitates the use of very-long-baseline interferometry (VLBI) above the hitherto available maximum frequency of 86 GHz.

    Aims. Using ultrahigh resolution VLBI observations at the currently highest available frequency of 228 GHz, we aim to perform a direct detection of compact structures and understand the physical conditions in the compact region of 3C 84.

    Methods. We used Event Horizon Telescope (EHT) 228 GHz observations and, given the limited (u, v)-coverage, applied geometric model fitting to the data. Furthermore, we employed quasi-simultaneously observed, ancillary multi-frequency VLBI data for the source in order to carry out a comprehensive analysis of the core structure.

    Results. We report the detection of a highly ordered, strong magnetic field around the central supermassive black hole of 3C 84. The brightness temperature analysis suggests that the system is in equipartition. We also determined a turnover frequency of ν_m = (113 ± 4) GHz, a corresponding synchrotron self-absorbed magnetic field of B_SSA = (2.9 ± 1.6) G, and an equipartition magnetic field of B_eq = (5.2 ± 0.6) G. Three components are resolved, with the highest fractional polarisation detected for this object (m_net = (17.0 ± 3.9)%). The positions of the components are compatible with those seen in low-frequency VLBI observations since 2017–2018. We report a steeply negative slope of the spectrum at 228 GHz. We used these findings to test existing models of jet formation, propagation, and Faraday rotation in 3C 84.

    Conclusions. The findings of our investigation into different flow geometries and black hole spins support an advection-dominated accretion flow in a magnetically arrested state around a rapidly rotating supermassive black hole as a model of the jet-launching system in the core of 3C 84. However, systematic uncertainties due to the limited (u, v)-coverage cannot be ignored. Our upcoming work using new EHT data, which offer full imaging capabilities, will shed more light on the compact region of 3C 84.

  4. Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation, but several factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases, and variations in the spelling of individual scholars' names complicate matters further. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers is a major step toward resolving these difficulties. We maintain, however, that author identifiers by themselves are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors who share a surname and the first initial of their first name. We illustrate our approach using three case studies.
    Design/methodology/approach: The approach we advance is based on commonalities among fielded data in search results. We cast a broad initial net: a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to the author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to the author (i.e., a large number of false positives). From this corpus we iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point; e.g., if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude they are one and the same individual. We find email addresses similarly adequate; e.g., if two author names sharing the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however, and when this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not: a single affiliation may have employed two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification.
To achieve name consolidation independent of author identifier matches, we developed a procedure for use with the bibliometric software VantagePoint (see www.thevantagepoint.com). While the application of our technique does not depend exclusively on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name within this field (referred to by the script as the primary author) whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial who share commonalities in the WOS field on which the user chose to consolidate. This typically results in a significant reduction in the initial dataset size; after the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to inspect manually (and/or apply additional name disambiguation techniques to).
Research limitations: Match field coverage can be an issue; when field coverage is paltry, dataset reduction is less significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is likely (sometimes, but not always) to have difficulty with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.
Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.
Originality/value: The procedure combines preexisting approaches with more recent ones, harnessing the benefits of both, and can save researchers engaged in name disambiguation a significant amount of time and effort.
Findings: Our study applies the name disambiguation procedure to three case studies. Ideal match fields are not the same for each case study; we find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, the timeframes analyzed, and the subject areas differ across the case studies. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives.
We attribute this to excellent match field coverage, especially in the more specific match fields, as well as to a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable for this problem. The procedure advanced herein is practical, replicable, and relatively user-friendly; it might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which are not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction; results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to yield a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage. A rough sketch of the field-commonality consolidation step appears below.
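As a hypothetical illustration of the field-commonality step described above (outside VantagePoint, with an invented record structure standing in for fielded WOS results), the iterative absorption of records that share a match-field value with known true positives might look like this:

```python
def consolidate(records, primary_name, match_field):
    """Sketch of the field-commonality step: starting from records whose
    author name is a known true positive, iteratively absorb records that
    share at least one value in `match_field` with the growing true set.

    `records` is a hypothetical list of dicts, e.g.
    {"author": "Doe, John", "emails": {...}}; field values are sets so
    overlap can be tested with intersection.
    """
    true_set = [r for r in records if r["author"] == primary_name]
    rest = [r for r in records if r["author"] != primary_name]
    changed = True
    while changed:                       # repeat until no record is absorbed
        changed = False
        known = set().union(*(r[match_field] for r in true_set))
        for r in rest[:]:
            if r[match_field] & known:   # shares a value with a true positive
                true_set.append(r)
                rest.remove(r)
                changed = True
    return true_set, rest                # rest is left for manual inspection

# Hypothetical toy corpus for 'Doe, J', consolidated on the email field.
recs = [
    {"author": "Doe, John", "emails": {"jdoe@uni.edu"}},
    {"author": "Doe, J",    "emails": {"jdoe@uni.edu"}},
    {"author": "Doe, J",    "emails": {"j.doe@other.org"}},
]
kept, leftover = consolidate(recs, "Doe, John", "emails")
print(len(kept), len(leftover))   # 2 records consolidated, 1 left to inspect
```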
  5. In many countries, sharing has become a significant approach to problems of spectrum allocation and assignment. As this approach moves from concept to reality, it is reasonable to expect an increase in interference or usage-conflict events between sharing parties. Scholars such as Coase, Demsetz, and Stigler have argued that appropriate enforcement is critical to successful contracts (such as spectrum sharing agreements), and Polinsky, Shavell, and others have analyzed enforcement mechanisms in general. While many ex ante measures may be used, reducing the social costs of ex ante enforcement means shifting the balance more toward ex post measures. Ex post enforcement requires detection, data collection, and adjudication methods. At present, these methods are either ad hoc (operating in a decentralized way between parties) or fairly costly (e.g., relying on the FCC Enforcement Bureau). The research presented in this paper is the culmination of an NSF-funded inquiry into how, and which, enforcement functions can be automated.