<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Combating False Data Injection Attacks on Human-Centric Sensing Applications</title></titleStmt>
			<publicationStmt>
				<publisher>ACM</publisher>
				<date>07/04/2022</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10359528</idno>
					<idno type="doi">10.1145/3534577</idno>
					<title level='j'>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</title>
<idno type="issn">2474-9567</idno>
<biblScope unit="volume">6</biblScope>
<biblScope unit="issue">2</biblScope>					

					<author>Jingyu Xin</author><author>Vir V. Phoha</author><author>Asif Salekin</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[The recent prevalence of machine learning-based techniques and smart device embedded sensors has enabled widespread human-centric sensing applications. However, these applications are vulnerable to false data injection attacks (FDIA) that alter a portion of the victim's sensory signal with forged data comprising a targeted trait. Such a mixture of forged and valid signals successfully deceives the continuous authentication system (CAS) to accept it as an authentic signal. Simultaneously, introducing a targeted trait in the signal misleads human-centric applications to generate specific targeted inferences that may cause adverse outcomes. This paper evaluates the FDIA's deception efficacy on sensor-based authentication and human-centric sensing applications simultaneously using two modalities: accelerometer and blood volume pulse signals. We identify variations of the FDIA, such as different forged signal ratios and smoothed and non-smoothed attack samples. Notably, we present a novel attack detection framework named Siamese-MIL that leverages the Siamese neural networks' generalizable discriminative capability and multiple instance learning paradigms through a unique sensor data representation. Our exhaustive evaluation demonstrates Siamese-MIL's real-time execution capability and high efficacy in different attack variations, sensors, and applications.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>on the smartphone that can manipulate the accelerometer sensory streams. E.g., SMASheD <ref type="bibr">[33]</ref> can enable a malicious app with only the INTERNET permission to manipulate motion sensors on unrooted Android devices. An installed malicious application can mislead the activity detection system into inferring incorrect activities. E.g., the attacker can inject other people's jogging data portions into the user's accelerometer sensory stream to mislead the activity detection system into believing that the victim is jogging. Such an attack reports misleading exercise measurements to the caregiver, which may lead to wrong follow-up interventions. If diabetes is not treated properly, it may further harm the patient's heart, eyes, kidneys, etc. <ref type="bibr">[5,</ref><ref type="bibr">34]</ref>, and increase the risk of death <ref type="bibr">[48]</ref>.</p><p>Scenario 2: A post-traumatic stress disorder (PTSD) patient wears a smart band/watch to monitor their stress levels. Though simple analyses such as step counting are performed on the wearable device itself, complicated applications like stress detection require the wearable to send raw sensing data to another device for processing. E.g., Empatica E4, Embrace, HeartGuide, and MobileHelp Smart are smart bands that connect to smartphones and transmit the collected physiological data over Bluetooth for advanced processing. During the data transmission, the FDIA attacker may access the Bluetooth communication and modify the data packets. A tool like BlueDoor <ref type="bibr">[62]</ref> can break the confidentiality of Bluetooth and alter the communicated information. The attacker can mix the victim's physiological data with another individual's data under stress to create an illusion that the victim is under long-term stress. Such an attack may result in a misdiagnosis, which can bring inappropriate or even harmful treatment to the patient. 
It may even result in worsened suicidal or homicidal ideation <ref type="bibr">[59]</ref>. It would also incur additional cost (e.g., the per-person annual PTSD treatment cost was $16,750 in the USA in 2016 <ref type="bibr">[63]</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Assumptions</head><p>In the threat model, each smart device has a unique owner, and smart device embedded sensors generate continuous data streams. We assume that the attacker has a large amount of sensory data of various traits (e.g., motion sensory data of different activities) from other individuals. These sensory data segments are used as the signal portions injected into the target (victim) user's sensory streams to generate the FDIA samples.</p><p>Recent studies <ref type="bibr">[28,</ref><ref type="bibr">35,</ref><ref type="bibr">52,</ref><ref type="bibr">54]</ref> have evaluated continuous authentication systems (CAS) that verify the authenticity of sensory data streams. CASs identify the characteristics of the sensory data indicative of the respective user's identity and repeatedly examine the authenticity of the continuous sensory streams. Our threat model considers a harder setting where a CAS is running in the background: only signals verified by the CAS are accepted by the other sensing applications. Hence, a successful FDIA sample needs to deceive the CAS into accepting it as an authentic signal and mislead the human-centric application into generating the targeted inference. Additionally, the FDIA assumes that the attacker has no information about the victim, the background sensing applications, or the CAS approaches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Attack Sample Generation</head><p>A challenge for FDIA sample generation is that replacing the whole sensory data stream with signals containing misinformation from others maximizes the misleading effect, but the CAS will easily reject such a signal because of its high inconsistency with the legitimate user's data; thus, the attack signal cannot even reach the targeted sensing application. Therefore, we generate the attack sample as a mixture of legitimate and forged data, containing both legitimate patterns and misinformation. Such a mixture is harder for the CAS to detect and can still mislead the sensing applications.</p><p>This paper evaluates the attack on n-length signals under continuous sensing settings. For example, the original sensor reading X = {x_1, x_2, ..., x_n} is an n-length sequence. The attack can use k (k ≥ 1) forged data segments from other individuals to replace k legitimate data portions. Consider k = 1 and an m-length (m &lt; n) forged data sequence F = {f_1, f_2, ..., f_m} injected into the legitimate signal X. The generated attack sample is A = {x_1, x_2, ..., x_i, f_1, f_2, ..., f_m, x_{i+m+1}, ..., x_n}, where a legitimate m-length signal portion is replaced by F at a random position (i.e., i + 1 to i + m). When k &gt; 1, multiple legitimate signal portions can be replaced, and the replacement signals can come from different individuals. 
This paper evaluates different characteristics of the attack samples that affect the FDIA's deception efficacy on authentication and sensing applications, as well as the difficulty of detecting the attack samples. The characteristics are discussed below:</p><p>(1) Forged signal ratio (FSR). Consider an n-length sensory signal in which a total of t_s-length signal is replaced with other individuals' forged data. In this attack sample, the forged signal ratio is FSR = t_s / n. Fewer injected signal portions in an attack sample yield a smaller FSR. Attack samples with smaller FSR have higher consistency with the legitimate data, making attack detection more challenging. One trade-off for the attacker is that a smaller FSR means a smaller portion of misinformation is injected into the signal; hence, it can be less effective in misleading human-centric sensing applications. To ensure the robustness of the attack detection approach, we evaluate attack samples with FSRs from 10% to 90%. When FSR = 0%, no attack signal is injected, and we consider it a pure sample; when FSR = 100%, the whole signal is replaced by others' data, which is considered a zero-effort attack <ref type="bibr">[22]</ref> sample that the CAS detects easily (discussed in Section 5.1). Thus, we do not include evaluations for 0% and 100% FSR samples.</p><p>(2) Effect of smoothing the boundaries between forged and legitimate signals. When forged signals from different individuals are injected, the transitions between the forged and legitimate signals can be inconsistent and thus distinguishable. Smoothing the boundaries may remove such inconsistency and make the attack samples harder to identify. Therefore, we evaluate both smoothed and non-smoothed attack samples. Figure <ref type="figure">1</ref> shows an example of the attack signal generation to deceive an activity detection system. 
The legitimate user's accelerometer signal during the walking activity is shown in the second row. This attack aims to misinform the activity detection system that the user is jogging. k = 2 forged jogging samples (top row), totaling t_s in length, are selected from different individuals. These samples randomly replace k = 2 portions (shown as red boxes in the second-row figure) of the original walking accelerometer signal, thus generating a synthetic attack sample (third row of Figure <ref type="figure">1</ref>). The signal transition at the boundaries of the inserted signal portions is easily distinguishable (e.g., the left boundary of the red box). Hence, a smoothing operation is applied. The smoothed accelerometer attack sample is shown in the bottom row of Figure <ref type="figure">1</ref>. Such a mixture of the legitimate user's data with other individuals' data is difficult for the CASs to detect and misinforms the background activity monitoring system as a jogging activity. </p></div>
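The injection procedure described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the function names, the non-overlapping-window retry loop, and the use of NumPy arrays are all assumptions.

```python
import numpy as np

def generate_fdia_sample(x, forged_segments, rng=None):
    """Sketch of FDIA sample generation: replace k random,
    non-overlapping portions of the legitimate signal x with
    forged segments from other individuals."""
    rng = np.random.default_rng(rng)
    a = x.copy()
    used = np.zeros(len(a), dtype=bool)  # marks already-replaced positions
    for f in forged_segments:
        m = len(f)
        # retry until the chosen window does not overlap a prior injection
        while True:
            i = int(rng.integers(0, len(a) - m + 1))
            if not used[i:i + m].any():
                break
        a[i:i + m] = f
        used[i:i + m] = True
    return a

def forged_signal_ratio(forged_segments, n):
    """FSR = t_s / n: fraction of the n-length signal that is forged."""
    return sum(len(f) for f in forged_segments) / n
```

For example, injecting two forged segments totaling 500 samples into a 1000-sample signal yields FSR = 0.5, the middle of the 10-90% range evaluated in the paper.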
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">CONTRIBUTIONS</head><p>This paper is the first to address the FDIA's capability to deceive smart device authentication and mislead human-centric applications with attack samples. The paper's novelty comes from formulating the FDIA detection problem as a multiple instance learning (MIL) problem. The FDIA detection identifies whether a signal sample (subject to inspection) comprises at least one pair of segments belonging to different individuals. This paper performs this task through a novel framework named Siamese-MIL that leverages the MIL paradigms, the Siamese network structure, and a unique sensor data representation. Unlike supervised learning or voting mechanisms, the presented approach learns to identify any segment pair containing signals from different individuals without such data annotations during training. Moreover, the MIL training paradigm effectively avoids potential bias due to the disproportionate ratio of legitimate and forged signal segments in the data. The Siamese-MIL approach is discussed in detail in Section 6.2.</p><p>In particular, this paper addresses the following research questions:</p><p>(1) Are smart device continuous authentication systems (CASs) effective in detecting FDIA?</p><p>(2) Can FDIA deceive human-centric sensing applications?</p><p>(3) How do the FSR of FDIA samples and the smoothing operation affect the FDIA's deception efficacy?</p><p>(4) What is the performance of the Siamese-MIL detection approach against smart device sensor FDIA?</p><p>(5) How do the FSR of FDIA samples and the smoothing operation affect Siamese-MIL's performance?</p><p>Using three datasets (BB-MAS <ref type="bibr">[42]</ref>, WISDM <ref type="bibr">[64]</ref> and WESAD <ref type="bibr">[53]</ref>) and two signal modalities (accelerometer (ACC) and blood volume pulse (BVP)), we have generated FDIA variations with different FSRs and smoothing operations. 
We evaluate FDIA's efficacy in deceiving two ACC-based authentication systems, ACC-based activity detection systems, and a BVP-based stress detection system, finding that FDIA with 50-60% FSR is highly effective in deceiving both the CASs and the human-centric sensing applications. Additionally, the smoothing operation does not increase FDIA's deception capability.</p><p>Siamese-MIL achieves an average 92.66% F1-score on the three datasets, showing its high efficacy in FDIA detection. Further evaluation shows that Siamese-MIL and its integration into authentication (Appendix C) achieve high attack detection accuracy against attack samples of all FSRs, and the smoothing operation does not significantly influence its performance. Additionally, we evaluate and demonstrate Siamese-MIL's real-time execution capability and resource usage on resource-constrained smart devices.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">GENERATED FALSE DATA INJECTION ATTACK DATASETS AND DATA PROCESSING</head><p>To our knowledge, no existing human-centric sensing dataset contains injection attack data. Following recent false data injection attack work <ref type="bibr">[17,</ref><ref type="bibr">32]</ref>, we develop synthetic attack datasets to simulate the FDIA. We use three publicly available datasets, BB-MAS <ref type="bibr">[42]</ref>, WISDM <ref type="bibr">[64]</ref> and WESAD <ref type="bibr">[53]</ref>, to generate FDIA samples for different applications: continuous authentication, activity detection, and emotional stress detection.</p><p>For each of the evaluations, we follow the person-disjoint hold-out method <ref type="bibr">[9]</ref>. Each dataset contains data from different individuals. To avoid personal bias and make the models generalizable, we separate each dataset into person-disjoint training, validation, and test subsets. Every subset contains data only from specific individuals, and no user's data appears in two subsets. This separation is performed randomly five times; hence, we obtain five groups of person-disjoint training, validation, and test subset combinations. All presented results are averaged over the five groups to reduce contingency and avoid overfitting. A detailed discussion of each dataset is below: (1) BB-MAS Dataset <ref type="bibr">[42]</ref>  For all datasets: For each subject, we generate the same number of synthetic FDIA and pure data samples. An equal number of attack samples is generated for each FSR, ranging from 10% to 90%. To evaluate the effect of smoothing, we generate two synthetic dataset variations for each dataset: one with smoothing and one without. For the BB-MAS, WISDM, and WESAD datasets, the periods used in exponential moving average smoothing are 0.1 s, 0.2 s, and 0.125 s, respectively.</p></div>
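The boundary smoothing above can be illustrated with a minimal exponential moving average. The paper does not give its exact smoothing implementation; the period-to-alpha mapping (alpha = 2/(span + 1)) and the sampling-rate parameter below are assumptions for illustration.

```python
import numpy as np

def ema_smooth(signal, period_s, fs):
    """Exponential moving average over a period_s-second window of a
    signal sampled at fs Hz (illustrative sketch; the paper's exact
    boundary-smoothing procedure may differ)."""
    span = max(int(period_s * fs), 1)
    alpha = 2.0 / (span + 1)  # assumed span-to-alpha convention
    out = np.empty(len(signal), dtype=float)
    out[0] = signal[0]
    for t in range(1, len(signal)):
        # each output blends the new sample with the running average,
        # which softens abrupt transitions at injection boundaries
        out[t] = alpha * signal[t] + (1 - alpha) * out[t - 1]
    return out
```

Applied around an injection boundary, the step between legitimate and forged portions becomes a gradual transition instead of a sharp jump.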
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">DECEPTION RESULT OF FALSE DATA INJECTION ATTACK</head><p>This section evaluates the FDIA's efficacy in deceiving smart device CASs and human-centric sensing applications such as activity and emotional stress detection through accelerometer (ACC) and blood volume pulse (BVP) signals. Detailed descriptions of the models used are in Appendix A.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Evaluation of FDIA on Authentication Systems</head><p>This section investigates the question "Are continuous authentication systems (CASs) effective in detecting false data injection attacks?". We evaluate FDIA samples' efficacy in deceiving a gait-based CAS using ACC signals from the BB-MAS dataset and a daily-activity motion-signal-based CAS using ACC data from the WISDM dataset.</p><p>Authentication Systems: Following recent work <ref type="bibr">[1,</ref><ref type="bibr">11]</ref> on continuous ACC-based smart device authentication systems, we developed Siamese convolutional neural networks that learn to differentiate the authentic user's ACC data from others'. The Siamese authentication models take two 10-s ACC signals as inputs, the legitimate user's reference signal and the signal to be authenticated, and identify whether the inputs are from the same or different individuals. If they match, the authentication is verified; otherwise, it is rejected. On the BB-MAS and WISDM datasets, the authentication models achieve 90% and 87% F1-scores, 93% and 89% true acceptance rates (considering FSR = 0% samples), and 87% and 84% true rejection rates on differentiating authentic vs. other individuals' signals.</p><p>FDIA on Authentication Systems: We evaluate FDIA's deception efficacy against the developed authentication systems considering two factors: the forged signal ratio (FSR) and the effect of the smoothing operation.</p><p>As mentioned in Section 2.4, if FDIA replaces the whole legitimate sensory data stream with signals from other individuals (i.e., an FSR = 100% sample), the authentication model has a high probability (87% on the BB-MAS dataset and 84% on the WISDM dataset) of detecting the attack sample. Hence, the attack sensory signals will fail to reach the targeted human-centric sensing application. 
Therefore, instead of replacing the whole legitimate signal, we consider a stealthier FDIA that generates a mixture of legitimate and forged data, keeping both legitimate patterns and misinformation.</p><p>For the BB-MAS dataset, FDIA samples are generated by inserting other individuals' gait-ACC data into the legitimate user's signal; for the WISDM dataset, attack samples are generated by inserting others' jogging (B) and taking-stairs (C) ACC signals into the legitimate user's walking (A) data, denoted A/B and A/C, respectively. Attack samples have FSRs ranging from 10% to 90%, and we evaluate both smoothed and non-smoothed variations. Tables <ref type="table">1a</ref> and 1b show the true rejection (i.e., attack detection) rates of the authentication models on different FDIA variations from the BB-MAS and WISDM datasets. On low-FSR attack samples, where only a small fraction (10-20%) of the ACC signal is forged, the authentication systems successfully reject only 8-32% of the FDIA samples. This is due to the high similarity of the majority of the attack signal portions to the legitimate user's movements. As a higher ratio of forged information is injected, the authentication systems achieve higher efficacy in capturing the attack. This evaluation demonstrates that, when the FSR is moderate (30-60%), FDIA samples have a good chance (23-73%) of deceiving the authentication systems; when the FSR is greater than 70%, FDIA samples are very likely to be detected. Furthermore, smoothed and non-smoothed attacks achieve similar performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Evaluation of FDIA on Human-centric Sensing Applications</head><p>This section investigates the question "Can false data injection attacks deceive human-centric sensing applications?". We evaluate FDIA samples on a human activity detection model based on accelerometer (ACC) data (Section 5.2.1) and a stress detection model based on blood volume pulse (BVP) data (Section 5.2.2).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.1">FDIA on Human Activity Detection System</head><p>This section's evaluations are performed on the WISDM dataset.</p><p>Activity Detection System: For four activities, walking (A), jogging (B), taking-stairs (C), and kicking-balls (M), we developed binary activity detection (AD) classifiers that take a 10-s ACC signal as input. Following the state-of-the-art study DeepSense <ref type="bibr">[65]</ref>, we develop an integration of convolutional (CNN) and Long Short-Term Memory (LSTM) networks, named the CNN-LSTM model, as the AD models. The classifiers for activities A, B, C, and M have detection accuracies of 74%, 87%, 72%, and 88%. Generated attack samples are evaluated on the corresponding AD models. E.g., A/B attack samples are generated to misinform the AD system that users are jogging while originally walking, so these samples are evaluated by the AD models of activities A and B. The percentages of these A/B attack samples detected as the respective activity by the walking (A) or jogging (B) detection models are shown in Tables <ref type="table">2a</ref> and <ref type="table">2b</ref>. When the FSR is 60% or higher, less than 13% of the A/B samples are detected as walking (A) by the AD model of A, and more than 50% are detected as jogging (B) by the AD model of B. Similarly, we obtain results for the A/M and C/B cases. For 60% or higher FSR, A/M and C/B FDIA samples are highly effective in deceiving the AD system into inferring that the target user is kicking balls (M) or jogging (B). This section's results establish that both smoothed and non-smoothed FDIA samples can successfully deceive activity detection models into inferring a targeted wrong activity. 
Emotional Stress Detection System: Following recent works <ref type="bibr">[24,</ref><ref type="bibr">46]</ref> that use a CNN-LSTM algorithm to detect mental stress from physiological signals, we developed a binary stress detection CNN-LSTM classifier that takes a 20-s BVP signal at each 20-s interval. The classifier achieves an F1-score of 79%, 75% accuracy on stress detection, and an 86% TNR, meaning the accuracy of detecting non-stress signals is 86%.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>FDIA on Stress Detection System:</head><p>We generate FDIA samples by inserting stressed signal portions into the target users' non-stressed BVP signals, with FSRs ranging from 10% to 90%. Table <ref type="table">3</ref> shows the BVP-based stress detection model's classification results on the generated attack samples. Smoothed and non-smoothed attacks perform similarly in deceiving the stress detection model. With 30-40% FSR attacks, about 30% of attack samples are detected as stressed. As the FSR increases, about 65-76% of attack samples are detected as stressed. This section's evaluation demonstrates that FDIA (specifically at moderate to higher FSRs) effectively deceives the stress detection system by generating forged 'stress' inferences while the target user is not stressed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.3">Conclusion from FDIA Deception Evaluation</head><p>According to our evaluation, with 50-60% FSR, FDIA samples can deceive the authentication system with a 32-45% false-sample-acceptance rate, the activity detection models with a 35-77% wrong-targeted-activity-inference rate, and the stress detection model with a 42-55% misclassification rate. With lower FSR, FDIA samples perform highly in deceiving the authentication system but poorly in deceiving the sensing applications. With higher FSR, FDIA samples perform poorly in deceiving the authentication system but highly in deceiving the sensing applications.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Background Discussion</head><p>Siamese Neural Network (SNN). An SNN <ref type="bibr">[7,</ref><ref type="bibr">68]</ref> employs a unique structure to naturally compare a pair of inputs in terms of their semantic similarity or dissimilarity. This paper leverages the SNN to distinguish input sensory signal samples of different individuals. (Detailed discussion in Appendix D)</p><p>Multiple Instance Learning (MIL). MIL is a weakly supervised learning problem where the input of a classifier is considered a bag of instances, B = {x_1, x_2, ..., x_n}. Instances exhibit neither dependency nor ordering among each other. Each bag of instances B has an associated single binary label Y ∈ {0, 1} known during training. However, the individual labels of the instances within a bag remain unknown. The MIL assumption is stated below.</p><p>According to the MIL assumption, known labels are attached to the bags, where a positive bag has label Y = 1 and a negative bag has label Y = 0. A negative bag has at least one negative instance (i.e., ∃ x_j ∈ B with y_j = 0), and a positive bag contains positive instances only (i.e., ∀ x_j ∈ B, y_j = 1). This assumption creates an asymmetry from a learning perspective, as all instances in a positive bag can be uniquely assigned a positive label, which cannot be done for a negative bag (which may contain both positive and negative instances). Thus, the relationship between the bag label Y and the instance labels y_j is Y = min{y_j}. The Siamese-MIL classifier training approach that adapts the MIL mechanism is discussed in the following section.</p></div>
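The bag-label relation Y = min{y_j} can be stated directly in code. This trivial sketch only makes the convention explicit; the function name is illustrative.

```python
def bag_label(instance_labels):
    """Bag label under the paper's convention Y = min{y_j}: a bag is
    negative (0) iff it contains at least one negative instance, and
    positive (1) only if every instance is positive."""
    return min(instance_labels)
```

For example, a bag with instance labels [1, 0, 1] is negative, matching the paper's use of negative bags to flag samples containing forged segments.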
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Siamese-MIL Approach</head><p>This section presents a novel MIL bag &amp; instance generation mechanism for FDIA detection (Section 6.2.1), Siamese-MIL attack detection approach (Section 6.2.2), and the Siamese-MIL training approach (Section 6.2.3).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.1">Siamese-MIL Bag and Instance Generation.</head><p>In the FDIA, sensory data from other individuals is mixed with the legitimate user's data. We leverage this characteristic to detect the attack.</p><p>Siamese-MIL takes a W-s sample in the form of a MIL bag B to identify attacks. If attack-generated forged data is present in the W-s sample, the MIL bag label is negative (i.e., 0); otherwise, the label is positive (i.e., 1). The novel contribution of the paper is how the bag instances are constructed. We segment the W-s sample into l segments x_k with overlap rate R, where k = 1, 2, ..., l, and the length of each x_k is V seconds. We compose l!/(2!(l-2)!) (i.e., all two-segment combinations) pairs of small segments (x_i, x_j), i ≠ j, ensuring that each small segment x_i is paired once with every other segment. These pairs are the instances of the bag B representing the W-s sample.</p><p>According to the definition (Section 2.4), not all segments x_k in a W-s FDIA sensory sample will contain the same individual's data. In a bag B (representing a W-s sample), if at least one segment pair (x_i, x_j) does not contain data from the same individual, the instance-level label of that pair is 0; hence, the bag label Y is 0, meaning the bag contains injected synthetic attack data.</p><p>6.2.2 Siamese-MIL FDIA Detection Mechanism. Algorithm 1 demonstrates the FDIA detection approach. An SNN ('net' in Algorithm 1) is trained to detect FDIA; it takes an instance (i.e., an (x_i, x_j) pair) as input and identifies whether the small segments are from the same individual. 
If yes, the inferred instance-level similarity score is &gt; 0.5 (lines 6-10). The label of the bag is determined by the pair with the lowest similarity score. A lowest score &gt; 0.5 means all instance-pairs are classified as positive (i.e., samples from the same individual), inferring the bag as positive and the W-s sample as legitimate; otherwise, the algorithm indicates that there exists at least one negative instance-pair (i.e., samples from different individuals) in the bag, inferring the bag label as negative and the W-s sample as corrupted due to FDIA.</p></div>
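The bag construction and the minimum-score decision rule above can be sketched as follows. The sampling rate `fs` and the similarity scores are stand-ins for the trained SNN's inputs and outputs; the defaults V = 2 s and R = 50% follow the paper's configuration.

```python
import itertools

def make_bag(sample, fs, V=2.0, R=0.5):
    """Cut a W-s sample into V-s segments with overlap rate R and
    return all C(l, 2) index pairs (the bag instances) plus segments."""
    seg_len = int(V * fs)
    step = int(seg_len * (1 - R))
    segments = [sample[i:i + seg_len]
                for i in range(0, len(sample) - seg_len + 1, step)]
    pairs = list(itertools.combinations(range(len(segments)), 2))
    return pairs, segments

def classify_bag(pair_scores, threshold=0.5):
    """Bag label from instance-pair similarity scores: positive (1,
    legitimate) iff the lowest score exceeds the threshold, else
    negative (0, attack detected)."""
    return 1 if min(pair_scores) > threshold else 0
```

With a 10-s sample at 100 Hz, this yields the paper's nine 2-s segments and 36 instance-pairs; a single pair scoring below 0.5 marks the whole bag as an attack.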
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1 Siamese-MIL Attack Detection</head><p>Require: a Siamese network (net), a W-s sample (s), the number of pairs to be extracted from s (l), the small segment size (V-s), and the overlap rate (R) 1: Initialize i = 0, minScore = 1, bag = {} 2: procedure Bag Instance Generation(bag = {}) Rule of Three Approximation of the MIL Paradigm. To further demonstrate reliable and effective attack detection through Siamese-MIL, we make some approximations. Suppose our SNN has a true negative rate p. We segment a corrupted W-s signal sample into l segments, where m segments contain corrupted data from other individuals. We consider a segment pair (x_i, x_j) impure if it contains different individuals' data, and there will be G = m(l - m) such impure pairs in the bag. A negative bag is misclassified if and only if all impure instance-pairs are misclassified. Considering the evaluation of each instance-pair to be mutually independent, the probability of misclassifying a negative bag is (1 - p)^G. According to the Rule of Three <ref type="bibr">[23]</ref>, p has to be in the range [0, 3/G] for negative bag misclassification with 95% confidence. Conversely, if p &gt; 3/G, we have 95% confidence that the negative bags (attacked samples) will be classified correctly. Consider the detection of the lowest FSR level, 10%. We segment the W = 10-s sensory signal into nine 2-s segments with 50% overlap, which gives 36 instance-pairs. If 2 segments (most likely) contain the forged data, G = 2 × (9 - 2) = 14. Therefore, if the SNN has p &gt; 3/14 ≈ 21.4%, Siamese-MIL will detect the attack successfully with 95% confidence. 
Though this is an idealized approximation, it gives insight into Siamese-MIL's effective attack detection capability. We evaluate the concept further in Section 7.1.</p></div>
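The worked example (l = 9 segments, m = 2 forged, G = 14) can be reproduced with a one-line helper. Note that the closed form G = m(l − m) is inferred here from that example, since the excerpt does not print the formula itself.

```python
def min_required_tnr(l, m, z=3.0):
    """Idealized Rule-of-Three bound: with G = m * (l - m) impure
    instance-pairs in a corrupted bag, the SNN's true-negative rate
    must exceed z / G (z = 3 for ~95% confidence) for Siamese-MIL to
    catch the attacked bag. G = m(l - m) is an inferred closed form."""
    G = m * (l - m)
    return z / G
```

With l = 9 and m = 2 this gives 3/14, i.e. the SNN only needs a TNR above roughly 21.4% for reliable bag-level detection under the stated independence assumption.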
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.3">Siamese-MIL Training Approach</head><p>We define a loss function according to the MIL training paradigm, where the loss E_b is defined by Equation <ref type="formula">2</ref>.</p><p>Here, N is the training batch size (i.e., the number of bags or W-s input windows in a batch), s_i^j is the similarity score of the j-th instance-pair of the i-th bag, m is the number of instances in a bag, and Y_i is the label of the i-th bag. The loss function (Equation <ref type="formula">2</ref>) penalizes a bag B_i on the difference between the bag label (Y_i) and the lowest instance-level score (i.e., the similarity score discussed above) in the bag.</p><p>In the FDIA detection task, the sensory signal from a W-s detection window is weakly labeled: only the bag-level label is available. Since the instance-pair-level labels are not available during training, supervised training approaches consider the labels of all the instance-pairs of a negative bag to be negative. Due to such an erroneous instance-pair-level label assumption, supervised learning approaches fail to achieve an optimal solution.</p><p>In Siamese-MIL training, the weights are updated according to the loss on the instance-pair whose similarity score is minimum among all the instance-pairs in the bag. If at least one instance-pair of a negative bag (i.e., Y_i = 0) has a similarity score of 0, the loss value on the concerned bag B_i is zero and the weights of the network are not updated. Therefore, Siamese-MIL training avoids weight updates due to positive instance-pairs in negative training samples (or bags). 
For positive-bag training, the loss on the concerned bag is zero (and the network weights are not updated) only if all of its instance-pairs are perfectly predicted as positive.</p></div>
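Equation 2 is not reproduced in this excerpt; the sketch below assumes a squared-error form of the described loss, penalizing each bag on the gap between its label and its minimum instance-pair similarity score. That assumed form reproduces the zero-loss behavior described above; the paper's exact Equation 2 may differ.

```python
def mil_bag_loss(scores, labels):
    """Sketch of the MIL loss: each bag is penalized on the difference
    between its label and its minimum instance-pair similarity score.

    scores: list of per-bag lists of instance-pair similarity scores in [0, 1]
    labels: list of bag labels (1 = legitimate, 0 = attacked)
    A squared-error form is assumed here."""
    total = 0.0
    for s_i, y_i in zip(scores, labels):
        total += (y_i - min(s_i)) ** 2
    return total / len(labels)

# Negative bag with one zero-score (impure) pair -> zero loss, no weight update
print(mil_bag_loss([[0.9, 0.0, 0.8]], [0]))   # 0.0
# Positive bag where every pair scores 1 -> zero loss as well
print(mil_bag_loss([[1.0, 1.0, 1.0]], [1]))   # 0.0
# Positive bag with one low-scoring pair -> penalized
print(mil_bag_loss([[1.0, 0.5, 1.0]], [1]))   # 0.25
```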
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">EVALUATION OF ATTACK DETECTION USING SIAMESE-MIL</head><p>This section investigates the question "What is the performance of the Siamese-MIL detection approach against smart device sensor false data injection attacks?". We evaluate the attack detection performance of Siamese-MIL on the smoothed gait-ACC FDIA samples used against authentication (Section 5.1), the motion-ACC FDIA samples used against activity detection (Section 5.2.1), and the BVP FDIA samples used against stress detection (Section 5.2.2) models discussed above. An equal number of attack samples was generated for each FSR, ranging from 10% to 90%. The effects of the smoothing operation and of different FSR variations of the FDIA on Siamese-MIL's performance are discussed in Section 8. The performance of Siamese-MIL is compared to a baseline CNN-LSTM, following the papers <ref type="bibr">[12,</ref><ref type="bibr">20,</ref><ref type="bibr">32,</ref><ref type="bibr">65,</ref><ref type="bibr">66]</ref> that address attack and sensory signal detection. Detailed descriptions of the baseline and SNN models used in the evaluations are provided in Appendix A.</p><p>The following sections discuss the beneficial parameters of the Siamese-MIL approach and its attack detection performance on gait-ACC (Section 7.1), motion-ACC (Section 7.2), and BVP (Section 7.3) samples.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.1">Attack Detection Performance on Gait-ACC FDIA Samples from BB-MAS Dataset</head><p>Beneficial Parameter Configurations: Siamese-MIL takes a W = 10-s 3-axis ACC sample (3 × 1000 dimension) as input to assess FDIA. We segment the sample into V = 2-s small segments with overlap rate R = 50% to compose the instance-pairs, extracting 36 instance-pairs (i.e., from 9 small segments) per sample. W, V, and R are hyper-parameters, and an ablation study on Siamese-MIL bag parameters is discussed in Appendix B.</p><p>Attack Detection Performance: Table <ref type="table">4a</ref> shows the attack detection performance of Siamese-MIL and the baseline CNN-LSTM model on smoothed gait-ACC FDIA samples. Siamese-MIL significantly outperforms the baseline. Siamese-MIL has a higher recall, indicating better legitimate-signal assessment performance. Notably, Siamese-MIL has much better attack detection performance (11% higher precision and 17% higher TNR) than the baseline. This is due to the MIL characteristic, where only one negative instance-pair is needed to identify a negative bag (i.e., attack sample). Thus, even if the SNN alone is not highly accurate in distinguishing two samples (i.e., instance-pairs) from different individuals, with enough instance-pairs within a bag, the Siamese-MIL evaluation gets a higher chance of correctly identifying an attack sample (Section 6.2.2).</p><p>Insights on the SNN's performance: We further evaluate the SNN's performance on differentiating instance-pairs from the same or different individuals. We generate pairs consisting of segments from a legitimate person (positive instance-pairs) and pairs containing segments from different individuals (negative instance-pairs). The SNN trained on smoothed gait-ACC data achieves a very high TPR of 98.95% and a lower TNR of 46.34%. 
The result confirms that the SNN and Siamese-MIL follow the MIL and Rule of Three assumptions (discussed in Section 6.2.2). In the MIL framework, a positive bag is misclassified if even one of its positive instance-pairs is misclassified. Hence, a high TPR is required of the SNN for Siamese-MIL to achieve high legitimate-signal detection (i.e., high recall) performance. Additionally, according to the Rule of Three assumption, a TNR above 21.4% is needed to achieve high attack detection accuracy (for 36-instance-pair MIL bags); our SNN achieves a 46.34% TNR, resulting in high attack detection performance by Siamese-MIL.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2">Attack Detection Performance on Motion-ACC FDIA Samples from WISDM Dataset</head><p>Tables <ref type="table">5a</ref> and <ref type="table">5b</ref> show the attack detection performance of the baseline and Siamese-MIL. Siamese-MIL achieves 5.9%, 12.4%, 9.9%, 1.8%, and 7.8% higher F1-scores in the A/B, A/C, A/M, C/B, and C/A attack cases, respectively. Notably, it achieves slightly better precision and TNR, and significantly higher recall, than the baseline. This evaluation indicates that, in general, Siamese-MIL has attack detection performance similar to the CNN-LSTM but is significantly less likely to misclassify a legitimate input.</p></div>
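The bag construction used above (a 10-s window cut into 2-s segments at 50% overlap, yielding 9 segments and 36 instance-pairs) can be sketched as follows. `make_bag` is an illustrative helper, and the 100 Hz sampling rate is inferred from the stated 3 × 1000 input dimension:

```python
from itertools import combinations

def make_bag(window, seg_len, overlap):
    """Slice a W-s window into V-s segments with the given overlap rate,
    then form every unordered segment pair (the MIL instance-pairs)."""
    step = int(seg_len * (1 - overlap))
    segments = [window[i:i + seg_len]
                for i in range(0, len(window) - seg_len + 1, step)]
    return segments, list(combinations(segments, 2))

# Gait-ACC configuration: 10-s window at 100 Hz, 2-s segments, 50% overlap
window = list(range(1000))                    # stand-in for one ACC axis
segments, pairs = make_bag(window, seg_len=200, overlap=0.5)
print(len(segments), len(pairs))              # 9 36
```

Each of the 36 pairs would then be scored by the SNN, and the bag decision follows the minimum similarity score.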
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.3">Attack Detection Performance on BVP FDIA Samples from WESAD Dataset</head><p>Beneficial Parameter Configurations: Our approach takes a W = 20-s BVP sample (1 × 1280 dimension) as input. The input is divided into V = 4-s small segments (1 × 256 dimension) with an overlap rate of R = 50%.</p><p>Attack Detection Performance: Table <ref type="table">4b</ref> shows the attack detection performance of Siamese-MIL and the baseline on smoothed BVP FDIA samples. Siamese-MIL outperforms the baseline and achieves a high recall and TNR, demonstrating that it is highly effective in detecting both legitimate and attacked BVP signals.</p><p>In conclusion: According to Section 7's evaluation, Siamese-MIL is highly effective in detecting both FDIA samples (high precision and TNR) and legitimate signals (high recall) across gait-ACC, motion-ACC, and BVP modalities, and it significantly outperforms the baseline CNN-LSTM models.</p></div>
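As a quick consistency check, the BVP configuration (1 × 1280 window, 1 × 256 segments, 50% overlap, i.e., 64 Hz) yields the same 9-segment, 36-pair bag geometry as the gait-ACC setup. `bag_geometry` is an illustrative helper, not the paper's code:

```python
def bag_geometry(n_samples, seg_samples, overlap):
    """Segment count and instance-pair count for a window configuration."""
    step = int(seg_samples * (1 - overlap))
    n_seg = (n_samples - seg_samples) // step + 1
    return n_seg, n_seg * (n_seg - 1) // 2

# BVP: 20-s window at 64 Hz (1 x 1280), 4-s segments (1 x 256), 50% overlap
print(bag_geometry(1280, 256, 0.5))   # (9, 36)
# Gait-ACC: 10-s window at 100 Hz, 2-s segments, 50% overlap
print(bag_geometry(1000, 200, 0.5))   # (9, 36)
```

The matching geometry means the Rule of Three argument from Section 6.2.2 transfers unchanged to the BVP modality.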
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">EVALUATION ON FDIA CHARACTERISTICS</head><p>The evaluations in Sections 5.1 and 5.2 demonstrate that FDIA with a moderate FSR has a good chance of deceiving authentication systems, and that FDIA (specifically at moderate to higher FSRs) effectively deceives human-centric sensing systems. This section therefore investigates the question "How do the FSR of FDIA samples and the smoothing operation affect Siamese-MIL's performance?" by evaluating Siamese-MIL against FSRs ranging from 10-90% and smoothing attack variations on all three datasets. Additionally, we evaluate Siamese-MIL's performance against attack samples whose injected signals come from the same individual or from different individuals on the BB-MAS and WESAD datasets.</p><p>Evaluation on FSR Variations. Tables 6a, 6b, and 7 display Siamese-MIL's attack detection rate (TNR) under different variations on the three datasets. According to our evaluation, Siamese-MIL achieves a high attack detection rate (TNR) at 20-80% FSR, while performance drops slightly for 10% or 90% FSR attacks.</p><p>The reason is that on 10% or 90% FSR attack samples, only a 1-s data segment (out of 10 s) differs from the rest. Since Siamese-MIL instances are 2-s with 50% overlap, at least one instance (out of 9) will contain a different signal, and the number of mismatched input instance-pairs is at its lowest (8 out of 36). In comparison, on 40% or 60% FSR attack samples, at least a 4-s data segment (out of 10 s) differs, meaning at least 3 (out of 9) instances will contain signals different from the rest. Hence, the number of mismatched input instance-pairs is at least 18 (out of 36). 
Therefore, an SNN with lower mismatched-pair detection accuracy still has a significantly higher probability of correctly detecting at least one mismatched instance-pair on the 40% or 60% FSR attack samples (out of 18 instance-pairs) than on the 10% or 90% FSR attack samples (out of 8 instance-pairs).</p><p>Notably, Siamese-MIL achieves high attack detection performance against the BVP FDIA samples at all FSR variations. BVP signals during high and low emotional stress are distinctively different. Hence, even at high and low FSR, the forged and legitimate signal portions are easily distinguishable.</p><p>Insights from Siamese-MIL's Performance on the Activity-Misleading FDIA. As shown in Table <ref type="table">7</ref>, on A/B and C/B attack samples, Siamese-MIL achieves 95-97% TNR on the 90% FSR samples, where only 10% of the ACC data is from the legitimate user. This is caused by the higher ACC signal amplitude differences between activity B (i.e., jogging) and the others. On 10% FSR attacks, Siamese-MIL achieves relatively lower performance, since a small high-amplitude signal segment may be present even in walking or stairs-taking activity signals, making the attack detection task difficult. On A/M attack samples, Siamese-MIL performs similarly. In the A/C and C/A attack cases, however, the taking-stairs (C) activity consists of some walking and some climbing steps, making it very similar to walking (A). Hence, on 10% or 90% FSR attacks, where only 10% of the signal differs from the rest, attack detection is challenging. Nevertheless, Siamese-MIL still achieves 70-75% TNR on the 10% or 90% FSR attacks. Overall, Siamese-MIL achieves a higher attack detection rate on FDIA samples where the legitimate and injected signal traits are highly dissimilar.</p><p>Evaluation on Smoothing Operation. According to the results in Tables 6a, 6b, and 7, Siamese-MIL performs similarly on smoothed and non-smoothed attack samples. 
Sensor data (i.e., ACC, BVP) contains noisy fluctuations similar to the forged-and-legitimate-signal boundary mismatches in the attack samples. So, even on non-smoothed attack samples, classifiers cannot differentiate whether signal fluctuations are due to noise or to the FDIA.</p><p>This section also evaluates whether injected signals from the same or from different individuals affect Siamese-MIL's performance. We evaluated two kinds of attack samples on the BB-MAS and WESAD datasets: (1) injected signals from a single person, and (2) injected signals from multiple people. Siamese-MIL achieves similar performance on both variations (Table <ref type="table">8</ref>). This is due to the MIL paradigm: only one dissimilar instance-pair is needed to determine an attack. Though the number of dissimilar instance-pairs (i.e., those containing data from different individuals) increases with the number of injected-signal sources in an attack sample, Siamese-MIL achieves only a marginally higher TNR against multiple-people-injected attack samples. Hence, the number of sources of injected forged signals does not significantly affect Siamese-MIL's performance.</p><p>In conclusion: According to the evaluation, low and high FSRs make FDIA detection harder. Neither the smoothing operation nor the number of sources of injected forged signals in an attack sample significantly influences attack detection. However, Siamese-MIL performs consistently higher (at all FSRs) against FDIA samples where the legitimate and injected signal traits are highly dissimilar.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>RELATED WORK</head><p>Attackers can manipulate data on smart wearables, such as Fitbit, Garmin, and Jawbone sensors; Rahman et al. <ref type="bibr">[44]</ref> have developed tools such as FitBite and GarMax to eavesdrop on and modify fitness sensor data on smart wearable devices. BlueDoor <ref type="bibr">[62]</ref>, developed by Wang et al., can read and write sensor data on Bluetooth Low Energy (BLE) devices. Cayre et al. 
<ref type="bibr">[10]</ref> describe an attack called InjectaBLE, which allows injecting malicious traffic into an existing BLE connection. Besides accessing communication channels, attackers may manipulate sensors remotely via malware. Mohamed et al. <ref type="bibr">[33]</ref> present a framework called SMASheD, which can sniff and manipulate many of Android's restricted sensors using a malicious app with only the INTERNET permission; Spy-sense <ref type="bibr">[16]</ref> is a malicious app that exploits the active memory region of sensors and relays the collected information; it can delete or modify sensor data. This discussion demonstrates that FDIA can be performed both by accessing wireless communication channels and via malware. Instead of focusing on preventing such sensory data manipulation, this paper develops an FDIA detection approach that prevents any attack sample from reaching human-centric sensing applications.</p><p>Though increasingly many studies address the threat of FDIA in various scenarios, to the best of our knowledge, this paper is the first to address a targeted FDIA that can deceive continuous authentication systems (CASs) and misinform multiple human-centric sensing applications simultaneously.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="11">STUDY SUMMARY AND DISCUSSION</head><p>The identified insights, observations, results, and limitations of the presented study are discussed below:</p><p>Siamese-MIL follows the MIL paradigm and Rule of Three assumptions. According to the Rule of Three approximation, an SNN TNR of only 21.4% or more is required to achieve high FDIA detection accuracy (for 36-instance-pair MIL bags). Our evaluation in Section 7.1 shows that the developed SNN has a TNR of 46.34% and a TPR of 98.95%, resulting in high attack detection performance by Siamese-MIL. These results confirm that the Siamese network and the Siamese-MIL framework follow the MIL paradigm and Rule of Three assumptions.</p><p>Low and high FSRs make FDIA detection harder. The reason is that only 10% of the signal in 10% and 90% FSR samples differs from the rest, and the SNN needs to be more accurate (on average 2+ times more accurate for a 36-instance-pair MIL bag) to capture such a mismatch. However, according to Sections 5.1 and 5.2, the 90% FSR samples are the easiest for the CAS to reject, and 10% FSR samples are not highly effective at deceiving human-centric sensing systems. Hence, the relatively lower performance of Siamese-MIL at these FSRs is less impactful.</p><p>Smoothing does not have a significant influence on attack detection. According to Section 8, Siamese-MIL's performance on smoothed and non-smoothed attack data is quite similar. Sensory data from smart devices comprises noisy fluctuations in the original signals similar to an attack sample's forged-and-legitimate signal boundary mismatches. Hence, even on non-smoothed attack samples, detection classifiers cannot differentiate whether signal fluctuations are due to noise or attack. Thus, the detection approaches perform similarly against both variations.</p><p>Effect of the signal trait. 
According to Section 8, Siamese-MIL achieves a relatively lower TNR when the injected signal's trait is highly similar to the legitimate signal's trait, e.g., in the A/C and C/A attack scenarios. Activity C (taking-stairs) contains some walking and some climbing-steps signals, making it very similar to A (walking); hence, Siamese-MIL achieves a lower TNR of 81-88%.</p><p>FDIA's potential adverse effects and mitigation through Siamese-MIL. The evaluations in Tables <ref type="table">2</ref> and <ref type="table">3</ref> show that FDIA with 40% or higher FSRs can effectively deceive activity detection and stress detection models. Such deception may result in wrong follow-up interventions, leading to health and monetary losses for the victim. The Siamese-MIL and CAS integration (Table <ref type="table">11</ref> in Appendix C) achieves high FDIA detection performance against all presented attack variations and hence would protect smart-device users from such adverse effects.</p><p>This paper focuses on FDIA against smart-device human-centric sensing applications. Our evaluation demonstrates that FDIA with a 50-60% forged signal ratio (FSR) can effectively deceive both authentication and human-centric applications, generating a critically adverse effect on the victim. We presented a novel FDIA detection framework (Siamese-MIL) that generates a unique signal representation suitable for formulating the FDIA detection task as a MIL problem and integrates the Siamese network and the MIL train-test paradigm for effective attack detection. Our exhaustive evaluation on three datasets (BB-MAS <ref type="bibr">[42]</ref>, WISDM <ref type="bibr">[64]</ref>, and WESAD <ref type="bibr">[53]</ref>) and two modalities (accelerometer and blood volume pulse) demonstrates Siamese-MIL's generalizability and high efficacy against all variations of the FDIA. 
The Siamese-MIL FDIA detection approach is designed to extend conventional authentication systems, preventing any attack signal from reaching the human-centric applications. Such integration achieves high attack detection accuracy on all presented attack variations.</p></div>
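The FSR effect summarized above can be made concrete with the Rule-of-Three-style arithmetic from Section 6.2.2, assuming independent instance-pair evaluations and the SNN's measured 46.34% TNR (illustrative helper names, not the paper's code):

```python
def detect_probability(p, G):
    """Chance that Siamese-MIL flags an attack: at least one of the G
    mismatched instance-pairs must be classified as dissimilar, where p is
    the SNN's TNR and pair evaluations are assumed independent."""
    return 1 - (1 - p) ** G

# 10% or 90% FSR: 8 mismatched pairs; 40-60% FSR: at least 18 (out of 36)
for G in (8, 18):
    print(G, round(detect_probability(0.4634, G), 4))

# The Rule-of-Three TNR requirement (3/G) roughly doubles at extreme FSRs,
# matching the "2+ times more accurate" remark above:
print(round((3 / 8) / (3 / 18), 2))   # 2.25
```

Both detection probabilities are high, but the 8-pair case leaves noticeably more room for a miss, which is consistent with the slight TNR drop observed at 10% and 90% FSR.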
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B APPENDIX: ABLATION STUDY OF MIL-BAG CONFIGURATIONS</head><p>We evaluate different MIL-bag representation parameters (Section 6.2.1): the MIL instance (i.e., small segment) size V and the overlap rate R. We evaluated V at 10%, 20%, and 30% of the input window length W, and R at 25% or 50%. The evaluations are performed on smoothed gait-ACC FDIA data from the BB-MAS dataset.</p><p>The evaluation of different instance sizes V and overlap rates R is shown in Table <ref type="table">10</ref>. According to the evaluation results, the 50% overlap rate gives better performance, and the 1-s and 2-s instance sizes V with R = 50% provide similarly high performance. When R = 50% and V = 1-s, there are 171 instance-pairs in a MIL bag representation, compared to 36 instance-pairs when R = 50% and V = 2-s. That means that with the R = 50%, V = 1-s hyper-parameters, Siamese-MIL performs 4.75 times more computations (i.e., SNN instance-pair comparisons) to produce similar attack detection performance (F1-score) compared to the R = 50%, V = 2-s configuration. Since the Siamese-MIL attack detection approach needs to be real-time executable on computationally constrained smart devices, we use R = 50% and V = W * 20% as the optimal hyper-parameter configuration.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>C APPENDIX: MITIGATION FOR THE ATTACK: AN EXTENSION OF THE AUTHENTICATION SYSTEM</head><p>Sections 7 and 8 show that the developed Siamese-MIL method effectively detects FDIA samples in different scenarios. This section discusses the mitigation strategy for the FDIA.</p><p>As discussed in Section 2, sensory signals are first verified by a continuous authentication system (CAS) before reaching any human-centric sensing application. 
Therefore, we can use Siamese-MIL as a CAS extension and verify a signal's authenticity by voting: if either component considers a signal an attack sample, it is rejected. To evaluate this integration's performance, we fuse the authentication systems developed in Section 5.1 with the Siamese-MIL models for the BB-MAS and WISDM datasets. The evaluation of the mitigation strategy on different FDIA variations is shown in Tables 11a and 11b. Compared to Tables 1a, 6a, 1b, and 7, this approach mitigates both the lower performance of Siamese-MIL at high FSR (90%) and the lower performance of the authentication system at lower FSRs (10-60%), achieving consistently high TNR on all FDIA variations. Furthermore, as in previous evaluations, the integrated system performs similarly on smoothed and non-smoothed attack samples. Therefore, with the integration of Siamese-MIL, the CAS is highly robust against all variations of the FDIA.</p></div>
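The voting rule above can be sketched as a simple conjunctive acceptance check; `fused_decision` is an illustrative name, not the paper's implementation:

```python
def fused_decision(cas_accepts, siamese_mil_accepts):
    """CAS + Siamese-MIL integration: a window is accepted only when both
    verifiers consider it legitimate; either one flagging it rejects it."""
    return cas_accepts and siamese_mil_accepts

# High-FSR attack: Siamese-MIL may miss it, but the CAS rejects it
print(fused_decision(cas_accepts=False, siamese_mil_accepts=True))   # False
# Low-FSR attack: the CAS is fooled, Siamese-MIL catches the forged segment
print(fused_decision(cas_accepts=True, siamese_mil_accepts=False))   # False
# A legitimate signal passes both verifiers
print(fused_decision(cas_accepts=True, siamese_mil_accepts=True))    # True
```

The two verifiers fail on complementary FSR ranges, so their conjunction covers the full 10-90% span, which is what Tables 11a and 11b report.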
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D APPENDIX: BACKGROUND DISCUSSION ON SIAMESE NEURAL NETWORK (SNN)</head><p>A Siamese Neural Network (SNN) <ref type="bibr">[7,</ref><ref type="bibr">68]</ref> employs a unique structure to naturally compare a pair of inputs in terms of their semantic similarity or dissimilarity. Two identical sub-networks generate the embedding representations of the respective input instances. The sub-networks are joined by a distance function that computes how close or far apart the input pair is in the embedding space. SNNs have been widely used in meta-learning <ref type="bibr">[15,</ref><ref type="bibr">67]</ref> due to their powerful discriminative capability, which generalizes not just to new data but to entirely new classes of data from unknown distributions. Hence, SNNs are suitable for human-centric sensing attack detection tasks, where very few or no examples of the target user's data are available.</p><p>Each individual has a unique behavioral or physiological pattern that is conveyed in their sensory signals, so sensory data from each individual can be categorized as a single class. This paper leverages the SNN structure to distinguish input sensory signal samples of different individuals (i.e., in FDIA detection). As shown in Figure <ref type="figure">2</ref>, the sub-networks take two sensory-signal (i.e., ACC, BVP) input samples (x_1 and x_2) and generate encoding representations. The L1 distance between the two encodings is computed, and the similarity score is obtained by passing the distance through a dense linear layer with a Sigmoid unit. A pair of signals is considered to be from the same individual if the similarity score &gt; 0.5.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0"><p>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 6, No. 2, Article 83. Publication date: June 2022. 
Combating False Data Injection Attacks on Human-Centric Sensing Applications &#8226; 83:3</p></note>
		</body>
		</text>
</TEI>
