Title: Using Semantic-Web Technologies for Situation Assessments of Ethical Hacking High-Value Targets
Ethical hacking consists of scanning for targets, evaluating the targets, gaining access, maintaining access, and clearing tracks. The evaluation of targets is a complex task due to the number of IP addresses, domain names, open ports, vulnerabilities, and exploits that must be examined. Ethical hackers synthesize data from various hacking tools to determine which targets are of high value and highly susceptible to cyber-attacks. These are situation assessment tasks. Previous research considers situation assessment tasks to be tasks that involve viewing an initial set of information about a problem and subsequently piecing together more information to solve the problem. Our research used semantic-web technologies, including ontologies, natural language processing (NLP), and semantic queries, to automate the situation assessment tasks conducted by ethical hackers when evaluating targets. More specifically, our research focused on automatically identifying education organizations that use industrial control system protocols that in turn have highly exploitable vulnerabilities and known exploits. We used semantic-web technologies to reduce an initial dataset of 126,636 potential targets to 155 distinct targets with these characteristics. Our research adds to previous research on situation assessment by showing how semantic-web technologies can reduce the complexity of situation assessment tasks.
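The paper's actual ontology and queries are not shown in this abstract; the following is a minimal sketch, using Python's rdflib library, of how a semantic query could narrow a scan dataset to education organizations running vulnerable, exploitable industrial control system protocols. All class and property names (ex:EducationOrganization, ex:usesProtocol, and so on), the data file, and the severity cutoff are hypothetical assumptions, not the authors' model.

```python
# Hedged sketch: hypothetical ontology vocabulary, not the paper's actual model.
from rdflib import Graph

g = Graph()
g.parse("targets.ttl")  # hypothetical graph populated from scan-tool output

QUERY = """
PREFIX ex: <http://example.org/hacking#>
SELECT DISTINCT ?target WHERE {
    ?target a ex:EducationOrganization ;
            ex:usesProtocol ?proto .
    ?proto  a ex:IndustrialControlProtocol ;
            ex:hasVulnerability ?vuln .
    ?vuln   ex:cvssScore ?score ;
            ex:hasKnownExploit ?exploit .
    FILTER (?score >= 9.0)  # 'highly exploitable' cutoff is an assumption
}
"""

for row in g.query(QUERY):
    print(row.target)  # each distinct high-value, highly susceptible target
```

A query of this shape collapses the manual cross-referencing of scan output, vulnerability databases, and exploit databases into a single graph traversal, which is how a reduction from 126,636 candidates to a short list of targets could be automated.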
Award ID(s):
1922202
NSF-PAR ID:
10327904
Journal Name:
European Conference on Cyber Warfare and Security
Sponsoring Org:
National Science Foundation
More Like this
  1. Industrial control systems (ICS) include systems that control industrial processes in critical infrastructure such as electric grids, nuclear power plants, manufacturing plants, water treatment systems, pharmaceutical plants, and building automation systems. ICS are complex systems containing an abundance of unique devices, each of which may run different types of software, including applications, firmware, and operating systems. Because of their ability to control physical infrastructure, ICS have increasingly become targets of cyber-attacks, raising the risk of serious damage, negative financial impact, disruption to business operations, disruption to communities, and even loss of life. Ethical hacking is one way to test the security of ICS. It consists of adopting a cyber-attacker's perspective and using a variety of cybersecurity tools to actively discover vulnerabilities and entry points for potential cyber-attacks. However, ICS ethical hacking is a difficult task due to the wide variety of devices found on ICS networks. Most ethical hackers lack expertise or knowledge about ICS hardware, device computing elements, protocols, vulnerabilities found on these elements, and the exploits used to exploit these vulnerabilities. Effective approaches are needed to reduce the complexity of ICS ethical hacking tasks. In this study, we use ontology modeling, a knowledge representation approach in artificial intelligence (AI), to model data that represent ethical hacking tasks for building automation systems. With ontology modeling, information is stored and represented in the form of semantic graphs that express individuals, their properties, and the relations between multiple individuals. Data are drawn from sources such as the National Vulnerability Database, ExploitDB, the Common Weakness Enumeration (CWE), the Common Attack Pattern Enumeration and Classification (CAPEC), and others. We show, through semantic queries, how the ontology model can automatically link together entities such as software names and versions of ICS software, vulnerabilities found on those software instances, vulnerabilities found on the protocols used by the software, exploits for those vulnerabilities, weaknesses that those vulnerabilities represent, and attacks that can exploit those weaknesses. The ontology modeling of ICS ethical hacking and the semantic queries run over the model can reduce the complexity of ICS hacking tasks. A sketch of this linking chain appears below.
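The abstract does not include the queries themselves; below is a minimal sketch, again using Python's rdflib, of how such a linking chain could be expressed as triples and then traversed in one semantic query. Every identifier (the software name and the CVE, EDB, CWE, and CAPEC entries) and every property name is a hypothetical placeholder, not the study's ontology.

```python
# Hedged sketch: placeholder individuals and properties, not the study's ontology.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/ics#")
g = Graph()
g.bind("ex", EX)

# Hypothetical individuals standing in for records drawn from the NVD,
# ExploitDB, CWE, and CAPEC; these identifiers are placeholders.
software = EX["BuildingAutomationServer_2.1"]
vuln     = EX["CVE-0000-0001"]
exploit  = EX["EDB-00001"]
weakness = EX["CWE-000"]
attack   = EX["CAPEC-000"]

g.add((software, RDF.type,             EX.ICSSoftware))
g.add((software, EX.hasVulnerability,  vuln))
g.add((vuln,     EX.hasExploit,        exploit))
g.add((vuln,     EX.hasWeakness,       weakness))
g.add((weakness, EX.exploitedByAttack, attack))

# One query walks the whole software -> vulnerability -> exploit/weakness -> attack chain.
rows = g.query("""
    PREFIX ex: <http://example.org/ics#>
    SELECT ?software ?vuln ?exploit ?attack WHERE {
        ?software ex:hasVulnerability ?vuln .
        ?vuln     ex:hasExploit   ?exploit ;
                  ex:hasWeakness  ?weak .
        ?weak     ex:exploitedByAttack ?attack .
    }""")
for r in rows:
    print(r.software, r.vuln, r.exploit, r.attack)
```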
  2. Black hat hackers use malicious exploits to circumvent security controls and take advantage of system vulnerabilities worldwide, costing the global economy over $450 billion annually. While many organizations are increasingly turning to cyber threat intelligence (CTI) to help prioritize their vulnerabilities, extant CTI processes are often criticized as being reactive to known exploits. One promising data source that can help develop proactive CTI is the vast and ever-evolving Dark Web. In this study, we adopted the computational design science paradigm to design a novel deep learning (DL)-based exploit-vulnerability attention deep structured semantic model (EVA-DSSM) that includes bidirectional processing and attention mechanisms to automatically link exploits from the Dark Web to vulnerabilities. We also devised a novel device vulnerability severity metric (DVSM) that incorporates the exploit post date and vulnerability severity to help cybersecurity professionals with their device prioritization and risk management efforts. We rigorously evaluated the EVA-DSSM against state-of-the-art non-DL and DL-based methods for short text matching on 52,590 exploit-vulnerability linkages across four testbeds: web application, remote, local, and denial of service. Results of these evaluations indicate that the proposed EVA-DSSM achieves precision-at-1 scores 20%-41% higher than non-DL approaches and 4%-10% higher than DL-based approaches. We demonstrated the EVA-DSSM's and DVSM's practical utility with two CTI case studies: openly accessible systems in the top eight U.S. hospitals and over 20,000 Supervisory Control and Data Acquisition (SCADA) systems worldwide. A complementary user evaluation of the case study results indicated that 45 cybersecurity professionals found the EVA-DSSM and DVSM results more useful for exploit-vulnerability linking and risk prioritization activities than those produced by prevailing approaches. Given the rising cost of cyberattacks, the EVA-DSSM and DVSM have important implications for analysts in security operations centers, incident response teams, and cybersecurity vendors.
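The abstract states that the DVSM incorporates the exploit post date and vulnerability severity but does not give the formula; the following is a hedged Python sketch of one plausible form, in which CVSS severity is weighted by exploit recency. The decay shape and the half-life parameter are assumptions, not the paper's metric.

```python
# Hedged sketch of a DVSM-like score; NOT the paper's actual formula.
from datetime import date

def dvsm_sketch(cvss_severity: float, exploit_post_date: date,
                today: date, half_life_days: float = 365.0) -> float:
    """Toy device-vulnerability severity in [0, 10]: CVSS severity
    weighted by exploit recency with an assumed one-year half-life."""
    age_days = (today - exploit_post_date).days
    recency = 0.5 ** (age_days / half_life_days)  # 1.0 for an exploit posted today
    return cvss_severity * recency

# Example: a 9.8-severity vulnerability whose exploit was posted six months ago
print(round(dvsm_sketch(9.8, date(2024, 1, 15), date(2024, 7, 15)), 2))  # ~6.94
```

The point of any such metric is the same as the abstract describes: a recent exploit against a severe vulnerability should rank a device higher than a stale exploit against an equally severe one.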
  3. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. (* These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section.) J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

Introduction: This decade has seen an ever-growing number of scientific fields benefitting from the advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further for challenges run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

The challenge: enabling wide participation: With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist). 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge (i.e.,
no participant should need to learn how to process fringe data formats and stream data efficiently). 4. No participant should need to provide their own computing resources. In addition to the above, our platform should further guide participants through the entire process from sign-up to model submission, facilitate collaboration, and provide instant feedback to the participants through data visualisation and intermediate online leaderboards.

The platform: The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participant's code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
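The starter kit's actual API is not given here; the following is a minimal sketch, with hypothetical function names, of the notebook structure just described: fixed scaffolding around dedicated spots where participants write pre-processing, model, and post-processing code.

```python
# Hedged sketch of the starter-kit pattern; names and helpers are hypothetical.
import numpy as np

def preprocess(raw_eeg: np.ndarray) -> np.ndarray:
    """Participant slot: filtering, windowing, normalisation, etc."""
    return (raw_eeg - raw_eeg.mean()) / (raw_eeg.std() + 1e-8)

def predict(windows: np.ndarray) -> np.ndarray:
    """Participant slot: a trained TensorFlow or PyTorch model would go
    here; a thresholded per-window energy detector stands in for one."""
    return (windows.var(axis=1) > 1.5).astype(int)  # 1 = predicted seizure

# In the real notebook, helpers loaded at instantiation would ship these
# functions to WML for cluster execution, e.g. run_on_wml(preprocess, predict)
# (run_on_wml is a hypothetical name, not a documented API).
eeg = np.random.randn(10, 256)  # stand-in for 10 windows of recorded EEG
print(predict(preprocess(eeg)))
```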
Figure 1: High-level architecture of the challenge platform

Measuring success: The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best performing solutions reached seizure detection performance that could reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists.

Figure 2: Primary occupation as reported by challenge participants

Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No single participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.

Conclusion: Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different kinds of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
  4. Organizations and communities learn by collecting information both from their direct experiences and from observing others. Information is translated into knowledge, which is disseminated and used to inform subsequent planning, decisions, and actions. Among the experiences and observations of organizations and communities that can be translated into knowledge are crises and disasters, including infectious disease outbreaks, water contamination events, and natural disasters. Organizational and community learning occurs when knowledge generated in response to crises is applied and when previous events serve as the basis for informing responses to an anticipated risk or emerging crisis. Trial-and-error learning is an ongoing process of experimentation, assessment, and evaluation. Through trial and error, it is possible to determine that an activity does not produce desired outcomes, allowing strategies to be adjusted and refined. Communities and organizations also benefit from observing others facing similar threats and learning from their failures and successes. Vicarious learning is bolstered through publicly available information, such as media reports and web presence, and through access to networks of similar organizations. Crises can provide opportunities to re-evaluate fundamental assumptions, norms, processes, structures, plans, technologies, and overall performance. This session provides an overview of learning from crises and presents cases from the COVID-19 pandemic response, water contamination events, and natural disasters. The COVID-19 response in the City of Detroit offers important lessons about public health disparities, community engagement, and sustained responses. Case studies of learning from the Flint Water Crisis and the Toledo Water Crisis illustrate how organizations and communities can translate experience into knowledge. Natural disasters can reveal systemic vulnerabilities and deficiencies in knowledge. Winter Storm Uri impacted Texas in mid-February 2021, bringing cold temperatures, record levels of snow, and damaging ice, devastating the electrical grid and prompting widespread boil-water notices. This case provides lessons about informing the public of emerging risks and understanding how people respond. These cases show how organizational learning may help organizations and communities prevent the repetition of similar crises and plan and respond more effectively.