Title: A Reality-Conforming Approach for QoS Performance Analysis of AFDX in Cyber-Physical Avionics Systems
Abstract: AFDX (Avionics Full Duplex Switched Ethernet) was developed to support mission-critical communications while providing deterministic Quality of Service (QoS) across cyber-physical avionics systems. Currently, AFDX relies on FP/FIFO (fixed-priority scheduling with FIFO service within each priority) QoS mechanisms to guarantee its real-time performance. To analyze the real-time performance of avionics systems during the design process, existing work derives deterministic delay bounds for AFDX using Network Calculus (NC). However, that analysis rests on an unrealistic assumption, so the worst cases it considers may not be achievable in practice. In this paper, we present a family of algorithms that search for realistic worst-case delay scenarios in both preemptive and non-preemptive settings. We then integrate the proposed algorithms with NC and apply the resulting approach to tandem AFDX networks. Our reality-conforming approach yields tighter delay-bound estimates than the state of the art: with 100 virtual links in the network, the bounds in our evaluation are more than 25% tighter than those computed by the state of the art. Moreover, using the reality-conforming method in the design process increases the number of virtual links the network can accommodate in the tandem scenario by 27.2%.
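As a point of reference for the Network Calculus (NC) terminology above, the sketch below computes the textbook delay bound for a token-bucket-constrained flow crossing a rate-latency server, plus the residual-service bound a lower-priority flow sees under fixed-priority scheduling. It is a generic illustration of NC-style delay bounds, not the paper's reality-conforming algorithm; the port rate, burst sizes, and flow rates are made-up example values.

```python
# Minimal Network Calculus sketch (textbook bounds, not the paper's algorithm):
# a token-bucket flow (burst b, rate r) crossing a rate-latency server (rate R,
# latency T) has worst-case delay at most T + b/R, and under fixed-priority
# scheduling a lower-priority class sees the residual service left over by the
# higher-priority class.

def delay_bound(b_bits, r_bps, R_bps, T_s):
    """Worst-case delay of a (b, r) token-bucket flow over a rate-latency (R, T) server."""
    assert r_bps <= R_bps, "flow must be sustainable by the server"
    return T_s + b_bits / R_bps

# Illustrative numbers only (not taken from the paper): a 100 Mb/s AFDX port,
# one high-priority and one low-priority virtual link, 1500-byte bursts.
C = 100e6                       # port rate, bit/s
hp = dict(b=8 * 1500, r=2e6)    # high-priority VL: 1500 B burst, 2 Mb/s
lp = dict(b=8 * 1500, r=1e6)    # low-priority VL: 1500 B burst, 1 Mb/s

# High priority sees the full port rate (non-preemptive blocking by a single
# in-flight low-priority frame is ignored here for brevity).
d_hp = delay_bound(hp["b"], hp["r"], C, T_s=0.0)

# Low priority sees a residual rate-latency service: rate C - r_hp and
# latency b_hp / (C - r_hp), the standard fixed-priority residual curve.
R_lp = C - hp["r"]
d_lp = delay_bound(lp["b"], lp["r"], R_lp, T_s=hp["b"] / R_lp)

print(f"high-priority delay bound: {d_hp * 1e6:.1f} us")
print(f"low-priority  delay bound: {d_lp * 1e6:.1f} us")
```

The paper's contribution lies in tightening bounds of this general kind by searching for worst-case scenarios that are actually realizable; the closed-form expressions above only illustrate the baseline style of analysis.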
Authors:
Award ID(s):
2146968 1646458
Publication Date:
NSF-PAR ID:
10297125
Journal Name:
2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQoS)
Page Range or eLocation-ID:
1 to 6
Sponsoring Org:
National Science Foundation
More Like this
  1. Most real-world networks are incompletely observed. Algorithms that can accurately predict which links are missing can dramatically speed up network data collection and improve network model validation. Many algorithms now exist for predicting missing links, given a partially observed network, but it has remained unknown whether a single best predictor exists, how link predictability varies across methods and networks from different domains, and how close to optimality current methods are. We answer these questions by systematically evaluating 203 individual link predictor algorithms, representing three popular families of methods, applied to a large corpus of 550 structurally diverse networks from six scientific domains. We first show that individual algorithms exhibit a broad diversity of prediction errors, such that no one predictor or family is best, or worst, across all realistic inputs. We then exploit this diversity using network-based metalearning to construct a series of “stacked” models that combine predictors into a single algorithm. Applied to a broad range of synthetic networks, for which we may analytically calculate optimal performance, these stacked models achieve optimal or nearly optimal levels of accuracy. Applied to real-world networks, stacked models are superior, but their accuracy varies strongly by domain, suggesting that link prediction may be fundamentally easier in social networks than in biological or technological networks. These results indicate that the state of the art for link prediction comes from combining individual algorithms, which can achieve nearly optimal predictions. We close with a brief discussion of limitations and opportunities for further improvements. (A toy sketch of predictor stacking appears after this list.)
  2. Attributed network embedding aims to learn low-dimensional vector representations for nodes in a network, where each node contains rich attributes/features describing node content. Because network topology and node attributes often exhibit high correlation, incorporating node attribute proximity into network embedding is beneficial for learning good vector representations. In reality, large-scale networks often have incomplete/missing node content or linkages, yet existing attributed network embedding algorithms all operate under the assumption that networks are complete. Thus, their performance is vulnerable to missing data and suffers from poor scalability. In this paper, we propose a Scalable Incomplete Network Embedding (SINE) algorithm for learning node representations from incomplete graphs. SINE formulates a probabilistic learning framework that separately models pairs of node-context and node-attribute relationships. Different from existing attributed network embedding algorithms, SINE provides greater flexibility to make the best of useful information and mitigate the negative effects of missing information on representation learning. A stochastic gradient descent based online algorithm is derived to learn node representations, allowing SINE to scale up to large-scale networks with high learning efficiency. We evaluate the effectiveness and efficiency of SINE through extensive experiments on real-world networks. Experimental results confirm that SINE outperforms state-of-the-art baselines in various tasks, including node classification, node clustering, and link prediction, under settings with missing links and node attributes. SINE is also shown to be scalable and efficient on large-scale networks with millions of nodes/edges and high-dimensional node features. (A toy embedding sketch appears after this list.)
  3. Recent efforts to test deep learning systems have produced an intuitive and compelling test criterion called neuron coverage (NC), which resembles the notion of traditional code coverage. NC measures the proportion of neurons activated in a neural network, and it is implicitly assumed that increasing NC improves the quality of a test suite. In an attempt to automatically generate a test suite that increases NC, we design a novel diversity-promoting regularizer that can be plugged into existing adversarial attack algorithms. We then assess whether such attempts to increase NC could generate a test suite that (1) detects adversarial attacks successfully, (2) produces natural inputs, and (3) is unbiased toward particular class predictions. Contrary to expectation, our extensive evaluation finds that increasing NC actually makes it harder to generate an effective test suite: higher neuron coverage leads to fewer defects detected, less natural inputs, and more biased prediction preferences. Our results suggest that increasing neuron coverage may not be a meaningful objective for generating tests for deep neural networks, and they call for a new test generation technique that considers defect detection, naturalness, and output impartiality in tandem. (A sketch of the neuron-coverage metric appears after this list.)
  4. Applications in environmental monitoring, surveillance, and patrolling typically require a network of mobile agents to collectively gain information regarding the state of a static or dynamical process evolving over a region. However, these networks of mobile agents also introduce various challenges, including intermittent observations of the dynamical process, loss of communication links due to mobility and packet drops, and the potential for malicious or faulty behavior by some of the agents. The main contribution of this paper is the development of resilient, fully-distributed, and provably correct state estimation algorithms that simultaneously account for each of the above considerations and, in turn, offer a general framework for reasoning about state estimation problems in dynamic, failure-prone, and adversarial environments. Specifically, we develop a simple switched linear observer for dealing with the issue of time-varying measurement models, and resilient filtering techniques for dealing with worst-case adversarial behavior subject to time-varying communication patterns among the agents. Our approach considers both communication patterns that recur in a deterministic manner and patterns that are induced by random packet drops. For each scenario, we identify conditions on the dynamical system, the patrols, the nominal communication network topology, and the failure models that guarantee applicability of our proposed techniques. Finally, we complement our theoretical results with detailed simulations that illustrate the efficacy of our algorithms in the presence of the technical challenges described above. (A sketch of filtering under intermittent measurements appears after this list.)
  5. The Lovász Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal algorithm of Moser & Tardos (2010) provides an efficient randomized algorithm to implement it. This algorithm can be parallelized to give an algorithm that uses polynomially many processors and runs in O(log^3 n) time, stemming from O(log n) adaptive computations of a maximal independent set (MIS). Chung et al. (2014) developed faster local and parallel algorithms, potentially running in time O(log^2 n), but these algorithms work under significantly more stringent conditions than the LLL. We give a new parallel algorithm that works under essentially the same conditions as the original algorithm of Moser & Tardos but uses only a single MIS computation, thus running in O(log^2 n) time. This conceptually new algorithm also gives a clean combinatorial description of a satisfying assignment which might be of independent interest. Our techniques extend to the deterministic LLL algorithm given by Chandrasekaran et al. (2013), leading to an NC-algorithm running in time O(log^2 n) as well. We also provide improved bounds on the runtimes of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. Our bounds extend to any problem instance in which the tighter Shearer LLL criterion is satisfied. We also improve on the analysis of Kolipaka & Szegedy (2011) to give tighter concentration results. (A sketch of the sequential resampling algorithm appears after this list.)
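A minimal sketch of the "stacking" idea from item 1: several individual topological link predictors are computed on an observed graph and combined by a simple meta-classifier. This is a toy illustration using networkx and scikit-learn on a random graph, not the 203-predictor metalearning setup evaluated in the paper; the graph size, the three chosen predictors, and logistic regression as the stacker are all assumptions made for demonstration.

```python
import random

import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Toy setup: build a random graph, hide 20% of its edges, then "stack" three
# individual topological predictors with a logistic-regression meta-model.
G_full = nx.erdos_renyi_graph(60, 0.08, seed=0)
edges = list(G_full.edges())
random.shuffle(edges)
hidden = edges[: len(edges) // 5]               # positives the stacker must recover
G_obs = G_full.copy()
G_obs.remove_edges_from(hidden)

def stacked_features(G, pairs):
    """One feature vector per node pair: three classic link-prediction scores."""
    jac = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, pairs)}
    aa = {(u, v): s for u, v, s in nx.adamic_adar_index(G, pairs)}
    pa = {(u, v): s for u, v, s in nx.preferential_attachment(G, pairs)}
    return np.array([[jac[p], aa[p], pa[p]] for p in pairs])

# Training pairs: the hidden true edges (label 1) and an equal number of
# genuine non-edges of the full graph (label 0).
hidden_set = {frozenset(e) for e in hidden}
non_edges = [p for p in nx.non_edges(G_obs) if frozenset(p) not in hidden_set]
random.shuffle(non_edges)
pairs = hidden + non_edges[: len(hidden)]
X = stacked_features(G_obs, pairs)
y = np.array([1] * len(hidden) + [0] * (len(pairs) - len(hidden)))

stacker = LogisticRegression().fit(X, y)
print("training accuracy of the stacked predictor:", stacker.score(X, y))
```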
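A toy sketch in the spirit of item 2: node embeddings trained with separate negative-sampling logistic losses on (node, context) pairs drawn from observed edges and (node, attribute) pairs drawn from observed attributes, so missing links or attributes simply contribute no training pairs. This is not the SINE algorithm itself; the data, embedding dimension, sampling scheme, and update rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (made up): 6 nodes, a few edges, and sparse binary attributes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (3, 5)]
attrs = {0: [0], 1: [0, 1], 2: [1], 3: [2], 4: [2, 3], 5: [3]}
n_nodes, n_attrs, dim = 6, 4, 8

Z = rng.normal(scale=0.1, size=(n_nodes, dim))    # node embeddings (the output)
Cx = rng.normal(scale=0.1, size=(n_nodes, dim))   # context-role embeddings
A = rng.normal(scale=0.1, size=(n_attrs, dim))    # attribute embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(node, other, table, label, lr=0.05):
    """One negative-sampling logistic-loss step on a (node, context) or
    (node, attribute) pair; `table` selects which relationship is updated."""
    z, w = Z[node].copy(), table[other].copy()
    g = sigmoid(z @ w) - label                    # d(loss)/d(score)
    Z[node] -= lr * g * w
    table[other] -= lr * g * z

for _ in range(200):                              # plain SGD passes over the data
    for u, v in edges:
        sgd_step(u, v, Cx, 1.0)                               # observed edge
        sgd_step(u, int(rng.integers(n_nodes)), Cx, 0.0)      # negative sample
        for a in attrs.get(u, []):                            # observed attributes
            sgd_step(u, a, A, 1.0)
            sgd_step(u, int(rng.integers(n_attrs)), A, 0.0)

print(np.round(Z, 2))                             # learned node representations
```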
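For item 3, a small sketch of the neuron-coverage metric itself: the fraction of hidden units whose activation exceeds a threshold on at least one input of a test suite. The network weights, threshold, and test inputs below are arbitrary stand-ins, not anything from the paper's evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random two-layer ReLU network; "neuron coverage" is the fraction of
# hidden units whose activation exceeded a threshold on at least one input.
W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)

def hidden_activations(x):
    h1 = np.maximum(0.0, x @ W1 + b1)          # ReLU layer 1 (32 neurons)
    h2 = np.maximum(0.0, h1 @ W2 + b2)         # ReLU layer 2 (16 neurons)
    return np.concatenate([h1, h2])

def neuron_coverage(test_suite, threshold=0.5):
    covered = np.zeros(32 + 16, dtype=bool)
    for x in test_suite:
        covered |= hidden_activations(x) > threshold
    return covered.mean()

suite = rng.normal(size=(20, 10))              # 20 random test inputs
print(f"NC = {neuron_coverage(suite):.2%}")
```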
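For item 4, a minimal sketch of state estimation under time-varying and intermittent measurements, using a standard Kalman filter whose measurement matrix follows a recurring pattern and is sometimes absent (a packet drop). This only illustrates the setting; it is not the paper's resilient, fully-distributed observer, and the system matrices, noise levels, and measurement schedule are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state system x_{k+1} = A x_k + w_k, observed through a measurement
# matrix that switches over time and is sometimes missing entirely.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)                    # process-noise covariance
R = 0.05                                # scalar measurement-noise variance
C_patterns = [np.array([[1.0, 0.0]]),   # agent sees the first state
              np.array([[0.0, 1.0]]),   # agent sees the second state
              None]                     # packet drop: no observation

x = np.array([0.0, 1.0])                # true state
x_hat = np.zeros(2)                     # estimate
P = np.eye(2)                           # estimate covariance

for k in range(50):
    # propagate the true system and the estimate
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q

    C = C_patterns[k % 3]               # deterministic recurring pattern
    if C is not None:                   # update only when a measurement arrives
        y = (C @ x).item() + rng.normal(scale=np.sqrt(R))
        S = (C @ P @ C.T).item() + R
        K = (P @ C.T) / S               # 2x1 Kalman gain
        x_hat = x_hat + (K * (y - (C @ x_hat).item())).ravel()
        P = (np.eye(2) - K @ C) @ P

print("final estimation error:", np.linalg.norm(x - x_hat))
```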
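For item 5, a sketch of the sequential Moser-Tardos resampling algorithm that the parallel variants build on: start from a uniformly random assignment and, while some "bad event" occurs (here, a violated CNF clause), resample just the variables of that event. The tiny formula is an arbitrary example, and the parallel/MIS machinery discussed in the abstract is not shown.

```python
import random

random.seed(0)

def moser_tardos_sat(n_vars, clauses, max_rounds=10_000):
    """Sequential Moser-Tardos resampling for a CNF formula.

    clauses: list of clauses, each a list of signed literals, e.g. 3 means
    variable 3 must be True, -3 means variable 3 must be False.
    """
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}

    def violated(clause):
        # a clause is violated only when every one of its literals is false
        return all(assign[abs(l)] != (l > 0) for l in clause)

    for _ in range(max_rounds):
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assign                   # satisfying assignment found
        for l in bad:                       # resample only the bad event's variables
            assign[abs(l)] = random.random() < 0.5
    raise RuntimeError("no satisfying assignment found within the round budget")

# Tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(moser_tardos_sat(3, clauses))
```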