

This content will become publicly available on August 9, 2024

Title: Can a Deep Learning Model for One Architecture Be Used for Others? Retargeted-Architecture Binary Code Analysis
Award ID(s):
2304720
NSF-PAR ID:
10463354
Author(s) / Creator(s):
Date Published:
Journal Name:
The 32nd USENIX Security Symposium
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As data privacy remains a crucial human-rights concern, as recognized by the UN, regulatory agencies have required developers to obtain user permission before accessing user-sensitive data. Developers fulfill their legal obligation to keep users abreast of requests for their data mainly through privacy policy statements. In addition, platforms such as Android enforce explicit permission requests through their permission models. Nonetheless, recent research has shown that service providers rarely make full disclosure when requesting data in these statements, nor is the current permission model designed to provide adequate informed consent. Users often have no clear understanding of the reason for, or the scope of use of, a data request. This paper proposes an unambiguous, informed consent process that provides developers with a standardized method for declaring Intent. Our proposed Intent-aware permission architecture extends the current Android permission model with a precise mechanism for full disclosure of purpose and scope limitation, a design grounded in an ontology study of data-request purposes. The overarching objective of this model is to ensure end users are adequately informed before making decisions about their data. Additionally, this model has the potential to improve trust between end users and developers. A conceptual sketch of such a purpose- and scope-annotated request appears after this list.
  2. Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods to jointly learn architecture representations and optimize architecture search over those representations, which incurs search bias. Despite their widespread use, architecture representations learned in NAS are still poorly understood. We observe that the structural properties of neural architectures are hard to preserve in the latent space if architecture representation learning and search are coupled, resulting in less effective search performance. In this work, we find empirically that pre-training architecture representations using only neural architectures, without their accuracies as labels, improves downstream architecture search efficiency. To explain this finding, we visualize how unsupervised architecture representation learning better encourages neural architectures with similar connections and operators to cluster together. This helps map neural architectures with similar performance to the same regions of the latent space and makes transitions between architectures in the latent space relatively smooth, which considerably benefits diverse downstream search strategies. A minimal sketch of this unsupervised pre-training step appears below the list.
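The sketch below illustrates, in Python, the kind of purpose- and scope-annotated data request described in item 1. The class, field names, purpose categories, and consent prompt are hypothetical assumptions for illustration only; the paper derives its purpose taxonomy from an ontology study and targets the Android permission model rather than this toy structure.

```python
from dataclasses import dataclass
from enum import Enum

class Purpose(Enum):
    # Illustrative purpose categories; the paper's taxonomy comes from an
    # ontology study of data-request purposes, not from this list.
    CORE_FUNCTIONALITY = "core functionality"
    ANALYTICS = "analytics"
    ADVERTISING = "advertising"

@dataclass
class IntentAwareRequest:
    """Hypothetical purpose- and scope-annotated data request (illustration only)."""
    permission: str          # e.g. "android.permission.ACCESS_FINE_LOCATION"
    purpose: Purpose         # why the data is requested
    scope: str               # how the data will be used and retained
    third_party_sharing: bool = False

    def consent_prompt(self) -> str:
        sharing = ("and may be shared with third parties"
                   if self.third_party_sharing else "and is not shared")
        return (f"This app requests {self.permission} for {self.purpose.value}; "
                f"use is limited to: {self.scope} {sharing}.")

# Example: a location request scoped to in-app navigation only.
req = IntentAwareRequest(
    permission="android.permission.ACCESS_FINE_LOCATION",
    purpose=Purpose.CORE_FUNCTIONALITY,
    scope="turn-by-turn navigation while the app is in the foreground",
)
print(req.consent_prompt())
```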
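The following is a minimal sketch of the unsupervised pre-training idea from item 2, assuming PyTorch and a toy flattened adjacency-plus-operations encoding of each architecture; the autoencoder objective, the encoding scheme, and all dimensions are illustrative assumptions rather than the paper's exact method.

```python
# Pre-train an architecture encoder with a reconstruction objective only
# (no accuracy labels), so that structurally similar architectures land
# close together in the latent space.
import torch
import torch.nn as nn

NUM_NODES, NUM_OPS = 7, 5
ENC_DIM = NUM_NODES * NUM_NODES + NUM_NODES * NUM_OPS  # adjacency + op one-hots
LATENT_DIM = 16

class ArchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(ENC_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                     nn.Linear(64, ENC_DIM))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy stand-in for a batch of architectures sampled from the search space.
archs = torch.rand(256, ENC_DIM).round()

model = ArchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised pre-training: the architecture encodings serve as both input
# and target; accuracies never enter the loss.
for epoch in range(10):
    recon, _ = model(archs)
    loss = loss_fn(recon, archs)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After pre-training, the latent vectors can be handed to any downstream
# search strategy (e.g. Bayesian optimization or evolutionary search).
with torch.no_grad():
    _, embeddings = model(archs)
print(embeddings.shape)  # torch.Size([256, 16])
```

Any architecture-aware encoder (for instance, a graph neural network over the cell DAG) could replace the MLP encoder here; the essential point is that performance labels never appear in the pre-training objective.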