Successful malware campaigns often rely on the ability of infected hosts to locate and contact their command-and-control (C2) servers. Malware campaigns frequently use DNS domains for this purpose, but those domains can be taken down by the registrar that sold them. In response, malware operators have begun using blockchain-based naming systems to store C2 server names. Blockchain naming systems are a threat to malware defenders because they are not subject to a centralized authority, such as a registrar, that can take down abused domains, whether voluntarily or under legal pressure; indeed, blockchains are robust against a variety of interventions that work on DNS domains. We analyze the ecosystem of blockchain naming systems and identify new locations for defenders to stage interventions against malware. In particular, we find that malware is obligated to use centralized or semi-centralized infrastructure to connect to blockchain naming systems and to modify the records stored within them. Scattered interventions have already been staged against this centralized infrastructure, and we present case studies of several such instances. We also study how blockchain naming systems are currently abused by malware operators and discuss the factors that would cause a blockchain naming system to become an unstoppable threat. We conclude that existing blockchain naming systems still provide opportunities for defenders to prevent malware from contacting its C2 servers.
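As a rough illustration of the chokepoint described above, the sketch below shows how a client might resolve a blockchain-based name through a hosted gateway. The gateway URL, endpoint, and response schema are hypothetical and not taken from the paper; the structural point is that the lookup traverses conventional, centrally operated infrastructure that defenders can pressure, monitor, or block.

```python
# Minimal sketch (hypothetical gateway, not from the paper): resolving a
# blockchain-based name via a centralized resolver instead of running a
# full blockchain node on the infected host.
import requests

GATEWAY = "https://gateway.example/resolve"   # hypothetical hosted resolver


def resolve_blockchain_name(name):
    """Ask a hosted gateway for the record stored behind a blockchain name."""
    resp = requests.get(GATEWAY, params={"name": name}, timeout=10)
    resp.raise_for_status()
    record = resp.json()                      # hypothetical JSON schema
    return record.get("address")              # e.g., the C2 server address


if __name__ == "__main__":
    # A defender who controls or can pressure the gateway can refuse, log,
    # or sinkhole this request even though the on-chain record itself
    # cannot be taken down.
    print(resolve_blockchain_name("malicious-c2.example"))
```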
Cracking the Wall of Confinement: Understanding and Analyzing Malicious Domain Take-downs
Take-down operations aim to disrupt cybercrime involving malicious domains. In the past decade, many successful take-down operations have been reported, including those against the Conficker worm and, most recently, against VPNFilter. Although it plays an important role in fighting cybercrime, the domain take-down procedure is still surprisingly opaque. There seems to be no in-depth understanding of how the take-down operation works and whether there is due diligence to ensure its security and reliability. In this paper, we report the first systematic study of domain take-downs. Our study was made possible by a large collection of data, including various sinkhole feeds and blacklists, passive DNS data spanning six years, and historical WHOIS information. Over these datasets, we built a unique methodology that extensively used various reverse lookups and other data analysis techniques to address the challenges in identifying taken-down domains, sinkhole operators, and take-down durations. Applying the methodology to the data, we discovered over 620K taken-down domains and conducted a longitudinal analysis of the take-down process, thus facilitating a better understanding of the operation and its weaknesses. We found that more than 14% of the domains taken down over the past ten months have been released back to the domain market, and that some of the released domains have been repurchased by the malicious actors before being captured and seized again, either by the same or a different sinkhole. In addition, we show that misconfigured DNS records for sinkholed domains allowed us to hijack a domain that was seized by the FBI. Further, we found that expired sinkholes have caused the transfer of around 30K taken-down domains whose traffic is now under the control of new owners.
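To make the passive-DNS side of such a study concrete, here is a minimal sketch (not the paper's actual methodology) of how one might flag a domain as taken down when its nameserver records move to a known sinkhole nameserver, and estimate how long it stayed sinkholed. The nameserver names and records below are hypothetical.

```python
# Simplified illustration: detect a take-down window from passive-DNS
# nameserver history. Real studies must also handle WHOIS changes,
# sinkhole identification, and many edge cases.
from datetime import date

KNOWN_SINKHOLE_NS = {"ns1.sinkhole.example", "ns2.sinkhole.example"}

# (domain, nameserver, first_seen) tuples, as passive DNS might report them
passive_dns = [
    ("bad-domain.example", "ns1.attacker.example", date(2018, 1, 5)),
    ("bad-domain.example", "ns1.sinkhole.example", date(2018, 3, 2)),
    ("bad-domain.example", "ns1.parking.example",  date(2018, 12, 20)),
]


def takedown_window(records):
    """Return (start, end) of the sinkholed period, or None if never sinkholed."""
    records = sorted(records, key=lambda r: r[2])
    start = end = None
    for _, ns, first_seen in records:
        if ns in KNOWN_SINKHOLE_NS and start is None:
            start = first_seen
        elif start is not None and ns not in KNOWN_SINKHOLE_NS:
            end = first_seen        # domain released back to the market
            break
    return (start, end) if start else None


print(takedown_window(passive_dns))  # sinkholed from 2018-03-02 until 2018-12-20
```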
- Award ID(s):
- 1801432
- PAR ID:
- 10097935
- Date Published:
- Journal Name:
- the 26th Annual Network and Distributed System Security Symposium
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Sinkholes are the most abundant surface features in karst areas worldwide. Understanding sinkhole occurrences and characteristics is critical for studying karst aquifers and mitigating sinkhole‐related hazards. Most sinkholes appear on the land surface as depressions or cover collapses and are commonly mapped from elevation data, such as digital elevation models (DEMs). Existing methods for identifying sinkholes from DEMs often require two steps: locating surface depressions and separating sinkholes from non‐sinkhole depressions. In this study, we explored deep learning to directly identify sinkholes from DEM data and aerial imagery. A key contribution of our study is an evaluation of various ways of integrating these two types of raster data. We used an image segmentation model, U‐Net, to locate sinkholes. We trained separate U‐Net models based on four input images of elevation data: a DEM image, a slope image, a DEM gradient image, and a DEM‐shaded relief image. Three normalization techniques (Global, Gaussian, and Instance) were applied to improve the model performance. Model results suggest that deep learning is a viable method to identify sinkholes directly from the images of elevation data. In particular, DEM gradient data provided the best input for U‐Net image segmentation models to locate sinkholes. The model using the DEM gradient image with Gaussian normalization achieved the best performance with a sinkhole intersection‐over‐union (IoU) of 45.38% on the unseen test set. Aerial images, however, were not useful in training deep learning models for sinkholes, as the models using an aerial image as input achieved sinkhole IoUs below 3%.
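A minimal sketch of the kind of preprocessing described above, under stated assumptions: the exact gradient computation and normalization used in the study may differ, and the toy DEM below is invented purely for illustration.

```python
# Sketch (assumptions, not the study's code): derive a gradient image from
# a DEM and apply Gaussian (z-score) normalization before feeding it to a
# U-Net-style segmentation model.
import numpy as np


def dem_gradient(dem, cellsize=1.0):
    """Gradient magnitude of a DEM (rate of elevation change per cell)."""
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.hypot(dzdx, dzdy)


def gaussian_normalize(img):
    """Z-score normalization: zero mean, unit variance."""
    return (img - img.mean()) / (img.std() + 1e-8)


# Toy DEM with a small depression in the middle standing in for a sinkhole
dem = np.ones((64, 64)) * 100.0
dem[28:36, 28:36] -= 5.0

model_input = gaussian_normalize(dem_gradient(dem, cellsize=10.0))
print(model_input.shape, float(model_input.mean()), float(model_input.std()))
```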
-
Sinkholes are common and naturally occurring in certain areas such as Florida and Southern Georgia. The region's aquifer is often covered by limestone or dolomite carbonate rock, which is made up of minerals that can dissolve in water under the right conditions. Anthropogenic changes are leading to an increased risk of sinkholes in susceptible areas. The formation of these geologic features is hastened by the improper management of groundwater, the increase in watershed pollution and runoff, and the mismanagement of underground fresh- and wastewater pipes and structures. The goal of this study is to develop an automated geospatial model to determine areas within the study area that have a potentially high risk of sinkholes. Eleven types of geospatial data were collected, processed, and analyzed in ArcGIS Pro Model Builder to calculate sinkhole vulnerability layers in the study area. The data types include geology, soil, land use, aquifer, groundwater measurements, road, fault line, elevation, precipitation, and evapotranspiration. From these data, ten sinkhole vulnerability layers were produced: 1) subsidence or surface change, 2) average aquifer well depth, 3) groundwater vulnerability (DRASTIC), 4) road density, 5) groundwater travel time, 6) aquifer media (Suwannee Limestone), 7) geology type, 8) slope, 9) land use, and 10) distance from fault lines. Each layer was reclassified and reassigned a value from 1 to 10 according to its sinkhole vulnerability. The weighted layers were then combined using ArcGIS Pro's weighted sum tool to produce a sinkhole risk probability raster. The sampling tool was used for accuracy assessment by comparing the result with historical sinkhole data; this showed 77% agreement between known sinkholes and the sinkhole probability raster. This study is useful to environmental planners, managers, and other stakeholders for decision support.
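The reclassify-then-weighted-sum step can be sketched outside ArcGIS as follows; the layers, breakpoints, and weights below are hypothetical stand-ins, since the study performed this with ArcGIS Pro's weighted sum tool over ten layers.

```python
# Illustrative sketch of a weighted overlay: reclassify each raster onto a
# 1-10 vulnerability scale, then combine with weights. Values and weights
# are invented for demonstration only.
import numpy as np


def reclassify(layer, breaks):
    """Map raw layer values onto 1-10 vulnerability scores using breakpoints."""
    return np.digitize(layer, breaks) + 1      # bins 0..9 -> scores 1..10


# Two toy 3x3 layers: depth to groundwater (m) and slope (degrees)
depth_to_water = np.array([[2.0, 8.0, 15.0], [30.0, 3.0, 6.0], [12.0, 25.0, 1.0]])
slope          = np.array([[0.5, 2.0, 5.0],  [10.0, 1.0, 0.2], [3.0, 8.0, 0.1]])

layers = {
    # Negate so that shallower groundwater / flatter terrain gets a higher score
    "depth": reclassify(-depth_to_water, breaks=np.linspace(-35, 0, 9)),
    "slope": reclassify(-slope,          breaks=np.linspace(-12, 0, 9)),
}
weights = {"depth": 0.6, "slope": 0.4}          # hypothetical weights

risk = sum(weights[name] * layers[name] for name in layers)
print(np.round(risk, 2))                         # higher = more sinkhole-prone
```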
-
One of the main roles of the Domain Name System (DNS) is to map domain names to IP addresses. Despite the importance of this function, DNS traffic often passes without being analyzed, making the DNS a center of attacks that keep evolving and growing. Software-based mitigation approaches and dedicated state-of-the-art firewalls can become a bottleneck and are subject to saturation attacks, especially in high-speed networks. The emerging P4-programmable data plane can implement a variety of network security mitigations at high speed without disrupting legitimate traffic. This paper describes a system that relies on programmable switches and their stateful processing capabilities to parse and analyze DNS traffic solely in the data plane and to apply security policies on domains as specified by the network administrator. In particular, Deep Packet Inspection (DPI) is leveraged to extract domain names consisting of any number of labels and, hence, to apply filtering rules (e.g., blocking malicious domains). Evaluation results show that the proposed approach can parse more domain labels than any state-of-the-art P4-based approach. Additionally, a significant performance gain is attained over a traditional software firewall (pfSense) in terms of throughput, delay, and packet loss. The resources occupied by the implemented P4 program are minimal, leaving room for additional security functionality.
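The paper performs this parsing in the P4 data plane at line rate; purely for illustration, the sketch below shows the equivalent label-extraction and blocklist-filtering logic on a raw DNS message in Python. The blocklist entries and the example query are invented.

```python
# Illustration only: extract the queried name from a DNS question section
# (length-prefixed labels terminated by a zero byte) and check a blocklist.
BLOCKLIST = {"malicious.example", "c2.badhost.example"}


def parse_qname(dns_payload):
    """Extract the queried domain name from a raw DNS message.

    The question section starts at byte 12; the name is a sequence of
    length-prefixed labels terminated by a zero byte.
    """
    labels, i = [], 12
    while dns_payload[i] != 0:
        length = dns_payload[i]
        labels.append(dns_payload[i + 1:i + 1 + length].decode("ascii"))
        i += 1 + length
    return ".".join(labels)


def should_drop(dns_payload):
    return parse_qname(dns_payload) in BLOCKLIST


# Hand-built query for "malicious.example" (12-byte header + question section)
query = (b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
         b"\x09malicious\x07example\x00\x00\x01\x00\x01")
print(parse_qname(query), should_drop(query))   # -> malicious.example True
```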
-
We identify over a quarter of a million domains used by medium and large companies within the .com registry. We find that for around 7% of these companies, very similar domain names have been registered with character changes that are intended to be indistinguishable at a casual glance. These domains would be suitable for use in Business Email Compromise frauds. Using historical registration and name server data, we identify the timing, rate, and movement of these look-alike domains over a ten-year period. This allows us to identify clusters of registrations that are quite clearly malicious and to show how the criminals have moved their activity over time in response to countermeasures. Although the malicious activity peaked in 2016, there is still sufficient ongoing activity to cause concern.
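As a rough illustration of what "indistinguishable at a casual glance" can mean in practice, the sketch below flags candidate registrations that differ from a target domain by a single edit or a common confusable-character swap. This is not the paper's method, and all domains shown are hypothetical.

```python
# Illustrative look-alike check: homoglyph folding plus edit distance 1.
CONFUSABLES = {"0": "o", "1": "l", "rn": "m", "vv": "w"}


def normalize(label):
    """Fold common look-alike substitutions back to their plain form."""
    for fake, real in CONFUSABLES.items():
        label = label.replace(fake, real)
    return label


def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def looks_alike(candidate, target):
    cand, targ = candidate.removesuffix(".com"), target.removesuffix(".com")
    return cand != targ and (normalize(cand) == targ or edit_distance(cand, targ) == 1)


for d in ["examp1e.com", "exarnple.com", "examplle.com", "unrelated.com"]:
    print(d, looks_alike(d, "example.com"))
```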