This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically. We define the "staircase" property for functions over the Boolean hypercube, which posits that high-order Fourier coefficients are reachable from lower-order Fourier coefficients along increasing chains. We prove that functions satisfying this property can be learned in polynomial time using layerwise stochastic coordinate descent on regular neural networks -- a class of network architectures and initializations with homogeneity properties. Our analysis shows that, for such staircase functions and neural networks, the gradient-based algorithm learns high-level features by greedily combining lower-level features along the depth of the network. We further support our theoretical results with experiments showing that staircase functions are learnable by more standard ResNet architectures with stochastic gradient descent. Both the theoretical and experimental results suggest that the staircase property has a role to play in understanding the capabilities of gradient-based learning on regular networks, in contrast to general polynomial-size networks, which can emulate any Statistical Query or PAC algorithm, as recently shown.
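As a toy illustration of the staircase idea (not taken from the paper), the sketch below defines a function over the Boolean hypercube {-1,+1}^3 whose nonzero Fourier coefficients sit on an increasing chain of subsets, and computes coefficients by brute-force enumeration. The specific function and helper names are invented for illustration.

```python
import itertools

def staircase(x):
    # Example staircase function over {-1,+1}^3:
    #   f(x) = x1 + x1*x2 + x1*x2*x3
    # Each monomial extends the previous one by one coordinate, so the
    # high-order Fourier coefficients are reachable from lower-order
    # ones along the increasing chain {1} < {1,2} < {1,2,3}.
    return x[0] + x[0]*x[1] + x[0]*x[1]*x[2]

def fourier_coefficient(f, subset, n=3):
    # hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ] under the uniform
    # distribution on the Boolean hypercube {-1,+1}^n.
    total = 0.0
    for x in itertools.product([-1, 1], repeat=n):
        chi = 1
        for i in subset:
            chi *= x[i]
        total += f(x) * chi
    return total / 2**n

# Coefficients on the chain are 1; off-chain coefficients vanish.
for S in [(0,), (0, 1), (0, 1, 2), (1,), (1, 2)]:
    print(S, fourier_coefficient(staircase, S))
```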
ideanet: Integrating Data Exchange and Analysis for Networks ('ideanet')
A suite of convenient tools for social network analysis geared toward students, entry-level users, and non-expert practitioners. 'ideanet' features unique functions for the processing and measurement of sociocentric and egocentric network data. These functions automatically generate node- and system-level measures commonly used in the analysis of these types of networks. Outputs from these functions maximize the ability of novice users to employ network measurements in further analyses while making all users less prone to common data analytic errors. Additionally, 'ideanet' features an R Shiny graphical user interface that allows novices to explore network data with minimal need for coding.
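To make "node-level" versus "system-level" measures concrete, here is a minimal pure-Python sketch computing one of each on a toy sociocentric network. This is illustrative only and is not ideanet's API (ideanet is an R package); the edge list and names are invented.

```python
# Toy sociocentric network as an edge list; compute a node-level
# measure (degree) and a system-level measure (density), the kinds of
# outputs 'ideanet' generates automatically for its users.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

n = len(degree)                          # number of nodes
density = 2 * len(edges) / (n * (n - 1)) # fraction of possible ties present

print(degree)    # {'a': 2, 'b': 2, 'c': 3, 'd': 1}
print(density)   # 4 edges out of 6 possible
```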
- Award ID(s):
- 2140024
- PAR ID:
- 10578701
- Publisher / Repository:
- Comprehensive R Archive Network
- Date Published:
- Edition / Version:
- 1.0.0
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Cell–cell interactions (CCI) play significant roles in manipulating the biological functions of cells. Analyzing the differences in CCI between healthy and diseased conditions of a biological system yields greater insight than analyzing either condition alone. There has been a recent and rapid growth of methods to infer CCI from single-cell RNA-sequencing (scRNA-seq), revealing complex CCI networks at a previously inaccessible scale. However, the majority of current CCI analyses from scRNA-seq data focus on direct comparisons between the individual CCI networks of individual patient samples, rather than "group-level" comparisons between sample groups of patients comprising different conditions. To illustrate new biological features among different disease statuses, we investigated the diversity of key network features across groups of CCI networks, as defined by disease status. We considered three levels of network features: node level, as defined by cell type; node-to-node level; and network level. By applying these analyses to a large-scale single-cell RNA-sequencing dataset of coronavirus disease 2019 (COVID-19), we observe biologically meaningful patterns aligned with the progression and subsequent convalescence of COVID-19.
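The three feature levels described above can be sketched on a toy weighted CCI network per condition group. All cell types, edge weights, and function names below are invented for illustration and are not the paper's method or data.

```python
# Hypothetical group-level view of cell-cell interaction (CCI)
# networks: one toy network per disease-status group, with edges as
# (cell type, cell type) -> interaction weight. Values are invented.
groups = {
    "healthy": [{("T", "B"): 0.8, ("T", "Mono"): 0.3}],
    "severe":  [{("T", "B"): 0.2, ("T", "Mono"): 0.9, ("B", "Mono"): 0.4}],
}

def node_strength(net, cell):
    # Node level: total interaction weight involving one cell type.
    return sum(w for (a, b), w in net.items() if cell in (a, b))

def network_total(net):
    # Network level: overall interaction intensity of the network.
    return sum(net.values())

for name, nets in groups.items():
    net = nets[0]
    print(name,
          node_strength(net, "T"),   # node level
          net.get(("T", "Mono")),    # node-to-node level
          network_total(net))        # network level
```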
-
Researchers have investigated a number of strategies for capturing and analyzing data analyst event logs in order to design better tools, identify failure points, and guide users. However, this remains challenging because individual- and session-level behavioral differences lead to an explosion of complexity and there are few guarantees that log observations map to user cognition. In this paper we introduce a technique for segmenting sequential analyst event logs which combines data, interaction, and user features in order to create discrete blocks of goal-directed activity. Using measures of inter-dependency and comparisons between analysis states, these blocks identify patterns in interaction logs coupled with the current view that users are examining. Through an analysis of publicly available data and data from a lab study across a variety of analysis tasks, we validate that our segmentation approach aligns with users' changing goals and tasks. Finally, we identify several downstream applications for our approach.
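A much-simplified sketch of the segmentation idea: close a block of goal-directed activity when the view under examination changes. The event fields and the single boundary rule below are invented stand-ins for the paper's inter-dependency measures and state comparisons.

```python
# Toy analyst event log; each event records an action and the view
# the user was examining when it occurred.
events = [
    {"action": "filter", "view": "scatter"},
    {"action": "zoom",   "view": "scatter"},
    {"action": "select", "view": "table"},
    {"action": "sort",   "view": "table"},
    {"action": "brush",  "view": "scatter"},
]

def segment(log):
    # Start a new block whenever the current view changes -- a crude
    # proxy for detecting a shift in the analyst's goal.
    blocks, current = [], []
    for ev in log:
        if current and ev["view"] != current[-1]["view"]:
            blocks.append(current)
            current = []
        current.append(ev)
    if current:
        blocks.append(current)
    return blocks

print([len(b) for b in segment(events)])  # [2, 2, 1]
```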
-
Resource flexing is the notion of allocating resources on-demand as workload changes. This is a key advantage of Virtualized Network Functions (VNFs) over their non-virtualized counterparts. However, it is difficult to balance timeliness and resource efficiency when making resource flexing decisions due to unpredictable workloads and complex VNF processing logic. In this work, we propose an Elastic resource flexing system for Network functions VIrtualization (ENVI) that leverages a combination of VNF-level features and infrastructure-level features to construct a neural-network-based scaling decision engine for generating timely scaling decisions. To adapt to dynamic workloads, we design a window-based rewinding mechanism to update the neural network with emerging workload patterns and make accurate decisions in real time. Our experimental results for real VNFs (IDS Suricata and caching proxy Squid), using workloads generated from real-world traces, show that ENVI provisions significantly fewer (up to 26%) resources without violating service-level objectives, compared to commonly used rule-based scaling policies.
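The window-based rewinding mechanism can be sketched as keeping only the most recent observations and refitting the scaling model on that window, so the model tracks emerging workload patterns. A one-feature threshold rule stands in for ENVI's neural network here; the window size, feature, and refit rule are all invented for illustration.

```python
from collections import deque

# Sliding window of recent (load, correct-decision) observations;
# old observations fall off automatically via maxlen.
WINDOW = 4
window = deque(maxlen=WINDOW)

def refit(obs):
    # "Model": scale up when load exceeds the midpoint between the
    # lowest load that required scale-up and the highest load that
    # did not, as seen in the current window.
    up = [load for load, d in obs if d == "scale_up"]
    stay = [load for load, d in obs if d == "no_op"]
    if not up or not stay:
        return 0.8                       # default threshold
    return (min(up) + max(stay)) / 2

for load, decision in [(0.2, "no_op"), (0.9, "scale_up"),
                       (0.3, "no_op"), (0.95, "scale_up"),
                       (0.85, "scale_up")]:
    window.append((load, decision))
    threshold = refit(window)            # refit after each observation

print(round(threshold, 3))  # 0.575
```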
-
Fast networks and the desire for high resource utilization in data centers and the cloud have driven disaggregation. Application compute is separated from storage, but this leads to high overheads when data must move over the network for simple operations on it. Alternatively, systems could allow applications to run application logic within storage via user-defined functions. Unfortunately, this ties provisioning and utilization of storage and compute resources together again. We present a new approach to executing storage-level functions in an in-memory key-value store that avoids this problem by dynamically deciding where to execute functions over data. Users write storage functions that are logically decoupled from storage, but storage servers choose where to run invocations of these functions physically. By using a server-internal cost model and observing function execution, servers choose to directly run inexpensive functions, while preferring to execute functions with high CPU cost at client machines. We show that with this approach storage servers can reduce network request processing costs, avoid server compute bottlenecks, and improve aggregate storage system throughput. We realize our approach on an in-memory key-value store that executes 3.2 million strict serializable user-defined storage functions per second with 100 µs response times. When running a mix of logic from different applications, it provides throughput better than running that logic purely at storage servers (85% more) or purely at clients (10% more). For our workloads, it also reduces latency (up to 2x) and transactional aborts (up to 33%) over pure client-side execution.
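The server-side placement decision can be sketched as a simple cost comparison: run cheap functions directly at the storage server, ship CPU-heavy ones to the client. The budget, per-record transfer cost, and function names below are hypothetical stand-ins for the server-internal cost model described above.

```python
# Assumed per-invocation CPU budget at the storage server (invented).
CPU_BUDGET_US = 50

def choose_placement(observed_cpu_us, records_to_ship):
    # Running at the server costs CPU; pushing execution to the
    # client costs shipping the records it reads over the network.
    server_cost = observed_cpu_us
    client_cost = records_to_ship * 10   # assumed 10 us per record
    if server_cost <= CPU_BUDGET_US and server_cost <= client_cost:
        return "server"
    return "client"

print(choose_placement(5, 100))   # cheap function, much data -> server
print(choose_placement(500, 2))   # CPU-heavy function -> client
```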