
Search results: All records where Creators/Authors contains "Lee, C."


  1. Free, publicly-accessible full text available January 1, 2023
  2. Models recently used in the literature proving residual networks (ResNets) are better than linear predictors are actually different from the standard ResNets that have been widely used in computer vision. In addition to assumptions such as scalar-valued output or a single residual block, the models considered in the literature fundamentally have no nonlinearities at the final residual representation that feeds into the final affine layer. To codify such a difference in nonlinearities and reveal a linear estimation property, we define ResNEsts, i.e., Residual Nonlinear Estimators, by simply dropping nonlinearities at the last residual representation from standard ResNets (a minimal sketch of this change appears after this list). We show that wide ResNEsts with bottleneck blocks can always guarantee a very desirable training property that standard ResNets aim to achieve, i.e., adding more blocks does not decrease performance given the same set of basis elements. To prove that, we first recognize that ResNEsts are basis function models that are limited by a coupling problem in basis learning and linear prediction. Then, to decouple prediction weights from basis learning, we construct a special architecture termed the augmented ResNEst (A-ResNEst) that always guarantees no worse performance with the addition of a block. As a result, such an A-ResNEst establishes empirical risk lower bounds for a ResNEst using corresponding bases. Our results demonstrate that ResNEsts indeed have a problem of diminishing feature reuse; however, it can be avoided by sufficiently expanding or widening the input space, leading to the above-mentioned desirable property. Inspired by the densely connected networks (DenseNets) that have been shown to outperform ResNets, we also propose a corresponding new model called the Densely connected Nonlinear Estimator (DenseNEst). We show that any DenseNEst can be represented as a wide ResNEst with bottleneck blocks. Unlike ResNEsts, DenseNEsts exhibit the desirable property without any special architectural re-design.
    Free, publicly-accessible full text available December 1, 2022
  3. Free, publicly-accessible full text available October 11, 2022
  4. Structures such as rehearsals have been designed within mathematics education to engage teacher candidates in deliberate practice of specific teaching episodes before enacting them within classroom settings. Current research has analyzed traditional rehearsals that involve peers acting as K-12 students while the teacher candidate facilitates an activity; however, innovative technologies such as the virtual simulation software Mursion® (developed as TeachLivE™) offer new opportunities to use student avatars in this context. This work explores the use of rehearsals within virtual simulations as compared to traditional rehearsals by using nonpooled two-sample t-tests (sketched after this list) to compare changes in the control and comparison groups regarding their use of eliciting strategies. Similarity of the groups in how they develop eliciting strategies presents evidence that virtual simulations have the potential to provide comparable contexts for rehearsals. At the same time, the specific differences between groups prompt further examination of the contexts and patterns in discussion to better understand what is influencing differential change.
  5. The exponential growth of IoT end devices creates the necessity for cost-effective solutions to further increase the capacity of IEEE 802.15.4g-based wireless sensor networks (WSNs). For this reason, the authors present a wireless sensor network concentrator (WSNC) that integrates multiple collocated collectors, each of them hosting an independent WSN on a unique frequency channel. A load balancing algorithm is implemented at the WSNC to uniformly distribute the number of aggregated sensor nodes across the available collectors (a sketch of one plausible policy appears after this list). The WSNC is implemented using a BeagleBone board acting as the Network Concentrator (NC), whereas the collectors and sensor nodes realizing the WSNs are built using TI CC13X0 LaunchPads. The system is assessed using a testbed consisting of one NC with up to four collocated collectors and fifty sensor nodes. The performance evaluation is carried out under race conditions in the WSNs to emulate highly dense networks with different network sizes and channel gaps. The experimental results show that the multi-collector system with load balancing proportionally scales the capacity of the network, increases the packet delivery ratio, and reduces the energy consumption of the IoT end devices.
  6. Summary: Envelopes have been proposed in recent years as a nascent methodology for sufficient dimension reduction and efficient parameter estimation in multivariate linear models. We extend the classical definition of envelopes in Cook et al. (2010) to incorporate a nonlinear conditional mean function and a heteroscedastic error. Given any two random vectors ${X}\in\mathbb{R}^{p}$ and ${Y}\in\mathbb{R}^{r}$, we propose two new model-free envelopes, called the martingale difference divergence envelope and the central mean envelope, and study their relationships to the standard envelope in the context of response reduction in multivariate linear models. The martingale difference divergence envelope effectively captures the nonlinearity in the conditional mean without imposing any parametric structure or requiring any tuning in estimation (a sketch of the underlying sample statistic appears after this list). Heteroscedasticity, or nonconstant conditional covariance of ${Y}\mid{X}$, is further detected by the central mean envelope based on a slicing scheme for the data. We reveal the nested structure of different envelopes: (i) the central mean envelope contains the martingale difference divergence envelope, with equality when ${Y}\mid{X}$ has a constant conditional covariance; and (ii) the martingale difference divergence envelope contains the standard envelope, with equality when ${Y}\mid{X}$ has a linear conditional mean. We develop an estimation procedure that first obtains the martingale difference divergence envelope and then estimates the additional envelope components in the central mean envelope. We establish consistency in estimation of the martingale difference divergence envelope and the central mean envelope without stringent model assumptions. Simulations and real-data analysis demonstrate the advantages of the martingale difference divergence envelope and the central mean envelope over the standard envelope in dimension reduction.
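The defining change in entry 2, dropping the nonlinearity at the last residual representation, can be made concrete in a few lines. The sketch below is a hypothetical PyTorch illustration, not the paper's implementation: the class name, layer widths, and plain two-layer residual blocks are invented for exposition, and the paper's bottleneck structure is simplified away.

    import torch
    import torch.nn as nn

    class TinyResNEst(nn.Module):
        """Illustrative ResNEst: residual blocks feeding a final affine
        layer directly, with no nonlinearity on the last representation."""

        def __init__(self, dim=16, hidden=32, num_blocks=3):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
                for _ in range(num_blocks)
            )
            self.head = nn.Linear(dim, 1)  # final affine layer

        def forward(self, x):
            for block in self.blocks:
                x = x + block(x)  # residual connection; nonlinearities stay inside blocks
            # A standard ResNet would apply, e.g., a ReLU here before the head;
            # a ResNEst passes the last residual representation through unchanged.
            return self.head(x)

    model = TinyResNEst()
    out = model(torch.randn(8, 16))  # shape (8, 1)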
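Entry 4 compares changes between the two groups with nonpooled two-sample t-tests, i.e., tests that do not assume equal group variances (Welch's t-test). A minimal SciPy sketch follows; the sample sizes, means, and spreads are synthetic placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical change scores in eliciting-strategy use (synthetic placeholders).
    traditional_change = rng.normal(loc=0.8, scale=1.0, size=25)
    simulation_change = rng.normal(loc=1.0, scale=1.2, size=25)

    # Nonpooled (unequal-variance) two-sample t-test, i.e., Welch's t-test.
    result = stats.ttest_ind(traditional_change, simulation_change, equal_var=False)
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}")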
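Entry 5's load balancing algorithm uniformly distributes aggregated sensor nodes across the collocated collectors. The abstract does not spell out the policy, so the sketch below assumes a simple least-loaded (join-the-shortest-queue) rule as one plausible realization; the function names are invented.

    def least_loaded_collector(loads):
        """Index of the collector currently hosting the fewest sensor nodes."""
        return min(range(len(loads)), key=loads.__getitem__)

    def distribute(num_nodes, num_collectors):
        """Assign each joining node to the least-loaded collector."""
        loads = [0] * num_collectors
        for _ in range(num_nodes):
            loads[least_loaded_collector(loads)] += 1
        return loads

    # Fifty sensor nodes over four collocated collectors, as in the testbed:
    print(distribute(50, 4))  # -> [13, 13, 12, 12]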
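Entry 6's martingale difference divergence envelope builds on the martingale difference divergence (MDD) of Shao & Zhang (2014). Assuming the V-statistic form of that definition for a scalar response, MDD^2(Y|X) = -E[(Y - EY)(Y' - EY) ||X - X'||] with (X', Y') an i.i.d. copy of (X, Y), a minimal NumPy sketch of the sample statistic is:

    import numpy as np

    def mdd_squared(X, Y):
        """Sample MDD(Y|X)^2 for scalar Y: zero (in expectation) when
        E[Y|X] is constant, positive when the conditional mean depends on X."""
        X = np.asarray(X, dtype=float)
        if X.ndim == 1:
            X = X[:, None]                              # shape (n, p)
        b = np.asarray(Y, dtype=float) - np.mean(Y)     # centered responses
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # ||X_i - X_j||
        return -float(np.mean(np.outer(b, b) * D))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    Y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # nonlinear conditional mean
    print(mdd_squared(X, Y))  # noticeably positive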