Image classifiers have become an important component of today’s software, from consumer and business applications to safety-critical domains. The advent of Deep Neural Networks (DNNs) is the key catalyst behind such widespread success. However, wide adoption comes with serious concerns about the robustness of software systems that depend on image classification DNNs, as several severe erroneous behaviors have been reported under sensitive and critical circumstances. We argue that developers need to rigorously test their software’s image classifiers and delay deployment until robustness is acceptable. We present an approach to testing image classifier robustness based on class property violations. We have found that many of the reported erroneous cases in popular DNN image classifiers occur because the trained models confuse one class with another or show biases towards some classes over others. These bugs usually violate some class properties of one or more of those classes. Most DNN testing techniques focus on per-image violations and thus fail to detect such class-level confusions or biases. We developed a testing approach to automatically detect class-based confusion and bias errors in DNN-driven image classification software. We evaluated our implementation, DeepInspect, on several popular image classifiers with precision up to 100% (avg. 72.6%) for confusion errors, and up to 84.3% (avg. 66.8%) for bias errors. DeepInspect found hundreds of classification mistakes in widely-used models, many of which expose errors indicating confusion or bias.
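As a rough illustration of what class-level (rather than per-image) analysis can look like, the sketch below aggregates a model's predictions into pairwise misclassification rates and flags class pairs that are confused in both directions. The function name, threshold, and metric are illustrative assumptions; the abstract does not specify DeepInspect's actual metrics.

```python
from collections import Counter
from itertools import combinations

def confusion_pairs(true_labels, pred_labels, threshold=0.1):
    """Flag class pairs (a, b) that the model frequently mixes up in
    either direction. A purely illustrative stand-in for a class-level
    confusion metric; not DeepInspect's actual algorithm."""
    counts = Counter(zip(true_labels, pred_labels))  # (true, pred) -> count
    totals = Counter(true_labels)                    # true class -> count
    flagged = []
    for a, b in combinations(sorted(totals), 2):
        # Rate at which class a is predicted as b, and b as a.
        rate_ab = counts[(a, b)] / totals[a]
        rate_ba = counts[(b, a)] / totals[b]
        # Require confusion in BOTH directions to flag the pair.
        if min(rate_ab, rate_ba) > threshold:
            flagged.append((a, b))
    return flagged
```

For example, a model that frequently labels cats as dogs and dogs as cats, but never misclassifies foxes, would yield the single flagged pair `("cat", "dog")`.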
Bias and fairness in software and automation tools in digital forensics
The proliferation of software tools and automated techniques in digital forensics has brought about some controversies regarding bias and fairness. Several kinds of bias exist and have been demonstrated in civil and criminal cases. In our research, we analyze and discuss the biases present in software tools and automation systems used by law enforcement organizations and in court proceedings. Furthermore, we present real-life cases and scenarios where some of these biases determined or influenced the outcome. We also provide recommendations for reducing bias in software tools, which we hope will form the foundation of a framework that reduces or eliminates bias in software tools used in digital forensics. In conclusion, we anticipate that this research can help improve the validation of digital forensics software tools and strengthen users' trust in these tools and automation techniques.
- Award ID(s): 2234710
- PAR ID: 10548892
- Publisher / Repository: OAE Publishing
- Date Published:
- Journal Name: Journal of Surveillance, Security and Safety
- Volume: 5
- Issue: 1
- ISSN: 2694-1015
- Page Range / eLocation ID: 19 to 35
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The pervasive use of databases for the storage of critical and sensitive information in many organizations has led to an increase in the rate at which databases are exploited in computer crimes. While several techniques and tools are available for database forensics, they mostly assume a priori database preparation, such as relying on tamper-detection software already being in place or on detailed logging. Instead, investigators need forensic tools and techniques that work on poorly configured databases and make no assumptions about the extent of damage in a database. In this paper, we present our database forensics methods, which can examine database content from a database image without using any log or system metadata. We describe how these methods can be used to detect security breaches in untrusted environments where the security threat arose from a privileged user (or someone who has obtained such privileges).
- As drones become widespread in numerous crucial applications with many powerful functionalities (e.g., reconnaissance and mechanical triggering), there are increasing cases of drones being misused for unethical or even criminal activities. It is therefore of paramount importance to identify malicious drones and track their origins using digital forensics. Traditional drone identification techniques for forensics (e.g., RF communication, ID landmarks read by a camera) require high compliance from drones; malicious drones, however, will not cooperate and may even spoof these identification techniques. We therefore explore a reliable and passive identification approach for forensics purposes based on unique hardware traits of the drones themselves (analogous to human fingerprints and irises). Specifically, we investigate and model the behavior of parasitic electronic elements under RF interrogation: a distinctive passive parasitic response modulated by a drone's electronic system that is unlikely to be counterfeited. Based on this theory, we design and implement DroneTrace, an end-to-end reliable and passive identification system for digital drone forensics. DroneTrace comprises a cost-effective millimeter-wave (mmWave) probe, a software framework to extract and process parasitic responses, and a customized deep neural network (DNN)-based algorithm to analyze and identify drones. We evaluate the performance of DroneTrace with 36 commodity drones. Results show that DroneTrace can identify drones with an accuracy of over 99% and an equal error rate (EER) of 0.009 under a 0.1-second sensing time budget. Moreover, we test its reliability, robustness, and performance variation under a set of real-world circumstances, where DroneTrace maintains accuracy of over 98% and remains resilient to various attacks. At its best, DroneTrace can identify individual drones at the scale of 10⁴ with less than 5% error.
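The matching step of such a fingerprint-style identification pipeline can be sketched as nearest-template matching with a rejection threshold, as below. This is a hypothetical simplification: DroneTrace's actual feature extraction and DNN-based classifier are not reproduced here, and the function names and threshold are assumptions for illustration only.

```python
import math

def identify(response, templates, min_similarity=0.9):
    """Match a measured parasitic-response feature vector against enrolled
    drone templates using cosine similarity. Illustrative sketch only;
    not DroneTrace's actual matching algorithm."""
    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(y * y for y in v))
        if nu == 0.0 or nv == 0.0:  # guard against zero vectors
            return 0.0
        return dot / (nu * nv)

    best_id, best_sim = None, -1.0
    for drone_id, template in templates.items():
        sim = cosine(response, template)
        if sim > best_sim:
            best_id, best_sim = drone_id, sim
    # Reject matches below the threshold: None means "unknown drone".
    return best_id if best_sim >= min_similarity else None
```

A real system would also need enrollment, noise-robust features, and calibration of the rejection threshold against the target equal error rate.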
- Previous research has revealed that newcomer women are disproportionately affected by gender-biased barriers in open source software (OSS) projects. However, this research has focused mainly on social and cultural factors, neglecting software tools and infrastructure. To shed light on how OSS tools and infrastructure might factor into OSS barriers to entry, we conducted two studies: (1) a field study with five teams of software professionals, who worked through five use cases to analyze the tools and infrastructure used in their OSS projects; and (2) a diary study with 22 newcomers (9 women and 13 men) to investigate whether the barriers matched those identified by the software professionals. The field study produced a bleak result: software professionals found gender biases in 73% of all the newcomer barriers they identified. The diary study confirmed these results: women newcomers encountered gender biases in 63% of the barriers they faced. Fortunately, many of the kinds of barriers and biases revealed in these studies could potentially be ameliorated through changes to OSS software environments and tools.