Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher.
Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
-
Researchers across various fields seek to understand causal relationships but often find controlled experiments impractical. To address this, statistical tools for causal discovery from naturally observed data have become crucial. Non-linear regression models, such as Gaussian process regression, are commonly used in causal inference but have limitations due to high costs when adapted for secure computation. Support vector regression (SVR) offers an alternative but remains costly in a multi-party computation (MPC) context due to conditional branches and support vector updates. In this paper, we propose Aitia, the first two-party secure computation protocol for bivariate causal discovery. The protocol is based on optimized MPC design choices and is secure in the semi-honest setting. At the core of our approach is BSGD-SVR, a new non-linear regression algorithm designed for MPC applications, achieving both high accuracy and low computation and communication costs. Specifically, we reduce the training complexity of the non-linear regression model from approximately O(N^3) to O(N^2), where N is the number of training samples. We implement Aitia using CrypTen and assess its performance across various datasets. Empirical evaluations show a significant speedup of 3.6× to 340× compared to the baseline approach. Free, publicly-accessible full text available October 14, 2025.
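As a rough, plaintext (non-MPC) illustration of the idea behind an SGD-trained kernel SVR: mini-batch subgradient descent on the epsilon-insensitive loss avoids solving the full SVR dual, so after the kernel matrix is built once in O(N^2), each update touches only a small batch of rows. The sketch below is an assumption-laden approximation of this style of training, not the paper's actual BSGD-SVR; the RBF kernel, hyperparameters, and names are illustrative.

```python
# Minimal sketch: kernel SVR trained with mini-batch subgradient descent on
# the epsilon-insensitive loss (illustrative only, not the paper's BSGD-SVR).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def batch_sgd_svr(X, y, epochs=50, batch=32, lr=0.01, eps=0.1, lam=1e-3):
    N = X.shape[0]
    K = rbf_kernel(X, X)            # O(N^2) kernel matrix, computed once
    alpha = np.zeros(N)             # one coefficient per training sample
    for _ in range(epochs):
        idx = np.random.permutation(N)
        for start in range(0, N, batch):
            b = idx[start:start + batch]
            resid = K[b] @ alpha - y[b]                      # batch prediction error
            # Subgradient of the epsilon-insensitive loss, plus L2 penalty on alpha.
            g = np.where(np.abs(resid) > eps, np.sign(resid), 0.0)
            alpha -= lr * (K[b].T @ g / len(b) + lam * alpha)
    return alpha

# Usage: predictions for new points X_new are rbf_kernel(X_new, X) @ alpha.
```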
-
Humans often use natural language instructions to control and interact with robots for task execution. This poses a big challenge to robots, which need to not only parse and understand human instructions but also realise semantic understanding of an unknown environment and its constituent elements. To address this challenge, this study presents a vision-language model (VLM)-driven approach to scene understanding of an unknown environment to enable robotic object manipulation. Given language instructions, a pretrained vision-language model built on the open-sourced Llama2-chat (7B) as the language model backbone is adopted for image description and scene understanding, which translates visual information into text descriptions of the scene. Next, a zero-shot approach to fine-grained visual grounding and object detection is developed to extract and localise objects of interest from the scene. Upon 3D reconstruction and pose estimation of the object, a code-writing large language model (LLM) is adopted to generate high-level control codes and link language instructions with robot actions for downstream tasks. The performance of the developed approach is experimentally validated through table-top object manipulation by a robot. Free, publicly-accessible full text available August 30, 2025.
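The overall flow described above can be pictured as a four-stage pipeline: scene description by the VLM, zero-shot grounding of the instructed object, 3D pose estimation, and control-code generation by a code-writing LLM. The skeleton below is only a structural sketch under those assumptions; all function names and stub return values are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical pipeline skeleton for instruction-driven object manipulation.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple      # (x, y, z) in the robot frame
    orientation: tuple   # quaternion (x, y, z, w)

def describe_scene(image) -> str:
    """Placeholder for the VLM captioner (e.g., Llama2-chat-backed)."""
    return "a table with a red mug and a screwdriver"

def ground_object(image, description: str, instruction: str):
    """Placeholder for zero-shot visual grounding; returns a 2D bounding box."""
    return (120, 80, 220, 180)

def estimate_pose(image, depth, bbox) -> Pose:
    """Placeholder for 3D reconstruction and object pose estimation."""
    return Pose((0.42, -0.10, 0.03), (0.0, 0.0, 0.0, 1.0))

def generate_control_code(instruction: str, pose: Pose) -> str:
    """Placeholder for the code-writing LLM emitting high-level robot code."""
    return f"robot.pick(position={pose.position})"

def handle_instruction(image, depth, instruction: str) -> str:
    description = describe_scene(image)
    bbox = ground_object(image, description, instruction)
    pose = estimate_pose(image, depth, bbox)
    return generate_control_code(instruction, pose)

print(handle_instruction(None, None, "pick up the red mug"))
```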
-
Human-Robot Collaboration (HRC) aims to create environments where robots can understand workspace dynamics and actively assist humans in operations, with human intention recognition being fundamental to efficient and safe task fulfillment. Language-based control and communication is a natural and convenient way to convey human intentions. However, traditional language models require instructions to be articulated following a rigid, predefined syntax, which can be unnatural, inefficient, and prone to errors. This paper investigates the reasoning abilities that have emerged from the recent advancement of Large Language Models (LLMs) to overcome these limitations, allowing human instructions to be used to enhance human-robot communication. For this purpose, a generic GPT-3.5 model has been fine-tuned to interpret and translate varied human instructions into essential attributes, such as task relevancy and the tools and/or parts required for the task. These attributes are then fused with the perceived ongoing robot action to generate a sequence of relevant actions. The developed technique is evaluated in a case study where robots initially misinterpreted human actions and picked up wrong tools and parts for assembly. It is shown that the fine-tuned LLM can effectively identify corrective actions across a diverse range of instructional human inputs, thereby enhancing the robustness of human-robot collaborative assembly for smart manufacturing.
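One way to prepare fine-tuning data for this kind of instruction-to-attribute mapping is to pair each free-form instruction with a structured target. The sketch below shows the common chat-message JSONL layout used when fine-tuning chat models such as GPT-3.5; the attribute schema, field names, and example instructions are assumptions for illustration, not the paper's actual format.

```python
# Illustrative construction of fine-tuning examples: instruction -> attributes
# (task relevancy, required tools/parts). Schema and field names are assumed.
import json

SYSTEM = "Extract assembly attributes from the operator's instruction as JSON."

def make_example(instruction, relevant, tools, parts):
    target = {"task_relevant": relevant, "tools": tools, "parts": parts}
    return {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": instruction},
        {"role": "assistant", "content": json.dumps(target)},
    ]}

examples = [
    make_example("That's the wrong bit, grab the Phillips screwdriver instead.",
                 True, ["phillips screwdriver"], []),
    make_example("Nice weather today, isn't it?", False, [], []),
]

# Write the examples in JSONL, the usual input format for chat fine-tuning.
with open("hrc_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```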
-
Nitrogen (N) is a limiting nutrient for primary productivity in most terrestrial ecosystems, but whether N limitation is strengthening or weakening remains controversial because both N sources and sinks are increasing in magnitude globally. Temperate marshes are exposed to greater amounts of external N inputs than most terrestrial ecosystems, and more than in preindustrial times, owing to their position downstream of major sources of human-derived N runoff along river mouths and estuaries. Simultaneously, ecosystem N demand may also be increasing owing to other global changes such as rising atmospheric [CO2]. Here, we used interannual variability in external drivers and variables related to exogenous supply of N, along with detailed assessments of plant growth and porewater biogeochemistry, to assess the severity of N limitation, and to determine its causes, in a 14-year N-addition × elevated CO2 experiment. We found substantial interannual variability in porewater [N], plant growth, and experimental N effects on plant growth, but the magnitude of N pools through time varied independently of the strength of N limitation. Sea level, and secondarily salinity, related closely to interannual variability in growth of the dominant plant functional groups, which drove patterns in N limitation and in porewater [N]. Experimental exposure of plants to elevated CO2 and years with high flooding strengthened N limitation for the sedge. Abiotic variables controlled plant growth, which determined the strength of N limitation for each plant species and for ecosystem productivity as a whole. We conclude that in this ecosystem, which has an open N cycle and where N inputs are likely greater than in preindustrial times, plant N demand has increased more than supply.
-
An algorithm is proposed to encode low-density parity-check (LDPC) codes into codewords with a non-uniform distribution. This enables power-efficient signalling for asymmetric channels. We show gains of 0.9 dB for additive white Gaussian noise (AWGN) channels with on-off keying modulation using 5G LDPC codes.
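The benefit of a non-uniform codeword distribution for on-off keying can be illustrated numerically. The sketch below is only an illustration under stated assumptions (real-valued AWGN, a fixed average-power constraint, simple grid integration), not the paper's encoding algorithm: it computes the mutual information of OOK as a function of the probability p of sending the "on" symbol, so that sweeping p shows how a shaped input distribution compares with p = 0.5 at a given noise level.

```python
# Mutual information of on-off keying over AWGN with input probability p,
# under a fixed average transmit power (illustrative sketch only).
import numpy as np

def ook_mutual_information(p, avg_power=1.0, sigma=1.0):
    A = np.sqrt(avg_power / p)                  # "on" amplitude under the power constraint
    y = np.linspace(-10 * sigma, A + 10 * sigma, 20000)
    phi = lambda m: np.exp(-(y - m) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    f_y = (1 - p) * phi(0.0) + p * phi(A)       # output density: mixture of two Gaussians
    h_y = -np.trapz(f_y * np.log2(f_y + 1e-300), y)     # differential entropy of Y
    h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)
    return h_y - h_y_given_x                    # I(X;Y) in bits per channel use

for p in (0.5, 0.4, 0.3, 0.2):
    print(f"p = {p:.1f}: I(X;Y) ≈ {ook_mutual_information(p):.3f} bits")
```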
-
An expurgating linear function (ELF) is an outer code that disallows low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. A list-decoding sieve can efficiently identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides analytical distance spectrum union (DSU) bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of K = 64 bits or a constant codeword blocklength of N = 152 bits, an ELF can reduce the gap at a CER of 10⁻⁶ between the DSU and the random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. List Viterbi decoding guided by the ELF achieves maximum likelihood (ML) decoding of the concatenated code with a sufficiently large list size. The rate-K/(K+m) ELF outer code reduces rate, and list decoding increases decoder complexity. As SNR increases, the average list size converges to 1 and the average complexity is similar to Viterbi decoding on the trellis of the inner code. For rare large-magnitude noise events, which occur with frequency below the frame error rate (FER) of the inner code, a deep search in the list finds the ML codeword.
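The way the outer ELF guides list decoding can be sketched abstractly: the list decoder produces candidates ordered by likelihood, and the decoder accepts the first candidate whose appended check bits are consistent with the ELF, which (with a large enough list) yields the ML codeword of the concatenated code. The sketch below models the ELF simply as m check bits produced by a GF(2) linear map of the K message bits; this form, the matrix, and all names are assumptions for illustration, not the paper's construction.

```python
# Illustrative sketch of ELF-guided list selection over ranked candidates.
import numpy as np

def elf_check(candidate_bits, K, H_elf):
    # Split the candidate into K message bits and m appended check bits, then
    # verify the check bits equal the ELF encoding of the message (mod 2).
    msg, checks = candidate_bits[:K], candidate_bits[K:]
    return np.array_equal(msg @ H_elf % 2, checks)

def list_decode(ranked_candidates, K, H_elf):
    # Deep search: scan candidates from most to least likely and stop at the
    # first one that satisfies the outer code.
    for cand in ranked_candidates:
        if elf_check(cand, K, H_elf):
            return cand[:K]
    return None  # list exhausted without a valid codeword (declare erasure)

# Toy usage with m = 3 check bits appended to a K = 4 bit message.
K = 4
H_elf = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [1, 0, 0]])
msg = np.array([1, 0, 1, 1])
good = np.concatenate([msg, msg @ H_elf % 2])
bad = good.copy(); bad[0] ^= 1                   # a corrupted, higher-ranked candidate
print(list_decode([bad, good], K, H_elf))        # the corrupted candidate fails the check
```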