WiP: The Intrinsic Dimensionality of IoT Networks
The Internet of Things (IoT) is revolutionizing society by connecting people and devices seamlessly and providing enhanced user experience and functionality. However, the unique properties of IoT networks, such as heterogeneity and non-standardized protocols, have created critical security holes and network mismanagement. We propose a new measurement tool for IoT network data to aid in analyzing and classifying such network traffic. We draw on evidence from both security and machine learning research suggesting that the complexity of a dataset can serve as a metric for the trustworthiness of its data. We test the complexity of IoT networks using Intrinsic Dimensionality (ID), a theoretical complexity measure based on the observation that a few variables can often describe a high-dimensional dataset. We use ID to empirically evaluate four modern IoT network datasets, showing that, for network- and device-level data generated using IoT methodologies, the data fits into a low-dimensional representation; this makes such data amenable to machine learning algorithms for anomaly detection.
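For readers who want to reproduce this kind of measurement, the sketch below shows one standard way to estimate ID from data: the Levina-Bickel maximum-likelihood estimator, computed from nearest-neighbor distances. The abstract does not state which estimator the authors use, so this particular function, the choice of k, and the synthetic sanity check are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def intrinsic_dimension_mle(X, k=10):
    """Levina-Bickel MLE estimate of intrinsic dimensionality (ID).

    X is an (n_samples, n_features) array; k is the neighborhood size.
    Hypothetical helper for illustration; not from the paper.
    """
    # Distances to the k nearest neighbors of each point (the first
    # column is the zero distance to the point itself, which we drop).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    dists = dists[:, 1:]
    # Local MLE at each point: inverse mean log-ratio of the k-th
    # neighbor distance to the closer neighbor distances.
    log_ratios = np.log(dists[:, -1:] / dists[:, :-1])
    local_id = (k - 1) / log_ratios.sum(axis=1)
    # Average the local estimates into a single global ID value.
    return local_id.mean()

# Toy sanity check: 5000 points on a 2-D plane embedded in a 10-D
# ambient space. The estimate should come out close to 2 even though
# the raw feature dimension is 10.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) @ rng.normal(size=(2, 10))
print(intrinsic_dimension_mle(X))
```

Applied to feature vectors extracted from IoT traffic, an estimate that is low relative to the raw feature count would support the abstract's claim that such data admits a low-dimensional representation.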
- Publication Date:
- NSF-PAR ID: 10334995
- Journal Name: SACMAT ’22, June 8–10, 2022, New York, NY, USA
- Page Range or eLocation-ID: 245-250
- Sponsoring Org: National Science Foundation