Search for: All records

Creators/Authors contains: "Khardon, Roni"

  1. Ensemble models (bagging and gradient boosting) of relational decision trees have proved to be some of the most effective learning methods in the area of probabilistic logic models (PLMs). While effective, they lose one of the most important benefits of PLMs: interpretability. In this paper we consider the problem of compressing a large set of learned trees into a single explainable model. To this end, we propose CoTE (Compression of Tree Ensembles), which produces a single small decision list as a compressed representation. CoTE first converts the trees to decision lists and then performs the combination and compression with the aid of the original training set. An experimental evaluation demonstrates the effectiveness of CoTE on several benchmark relational data sets. A toy sketch of the tree-to-rule conversion appears after this list.
    Free, publicly-accessible full text available March 1, 2025
The paper introduces DiSProD, an online planner developed for environments with probabilistic transitions in continuous state and action spaces. DiSProD builds a symbolic graph that captures the distribution of future trajectories, conditioned on a given policy, using independence assumptions and approximate propagation of distributions. The symbolic graph provides a differentiable representation of the policy's value, enabling efficient gradient-based optimization for long-horizon search. The propagation of approximate distributions can be seen as an aggregation of many trajectories, making it well-suited for dealing with sparse rewards and stochastic environments. An extensive experimental evaluation compares DiSProD to state-of-the-art planners in discrete-time planning and real-time control of robotic systems. The proposed method improves over existing planners in handling stochastic environments, sensitivity to search depth, sparsity of rewards, and large action spaces. Additional real-world experiments demonstrate that DiSProD can control ground vehicles and surface vessels to successfully navigate around obstacles. A toy moment-propagation sketch appears after this list.
  3. The paper provides a thorough investigation of Direct Loss Minimization (DLM), which optimizes the posterior to minimize predictive loss, in sparse Gaussian processes. For the conjugate case, we consider DLM for log-loss and DLM for square loss, showing a significant performance improvement in both cases. The application of DLM in non-conjugate cases is more complex because the logarithm of the expectation in the log-loss DLM objective is often intractable, and simple sampling leads to biased estimates of gradients. The paper makes two technical contributions to address this. First, a new method using product sampling is proposed, which gives unbiased estimates of gradients (uPS) for the objective function. Second, a theoretical analysis of biased Monte Carlo estimates (bMC) shows that stochastic gradient descent converges despite the biased gradients. Experiments demonstrate the empirical success of DLM. A comparison of the sampling methods shows that, while uPS is potentially more sample-efficient, bMC provides a better tradeoff in terms of convergence time and computational efficiency. The contrast between the two objectives involved is written out after this list.
  4. Bayesian models are established as one of the main successful paradigms for complex problems in machine learning. To handle intractable inference, research in this area has developed new approximation methods that are fast and effective. However, theoretical analysis of the performance of such approximations is not well developed. The paper furthers such analysis by providing bounds on the excess risk of variational inference algorithms and related regularized loss minimization algorithms for a large class of latent variable models with Gaussian latent variables. We strengthen previous results for variational algorithms by showing they are competitive with any point-estimate predictor. Unlike previous work, we also provide bounds on the risk of the Bayesian predictor, not just the risk of the Gibbs predictor for the same approximate posterior. The bounds are applied in complex models including sparse Gaussian processes and correlated topic models. Theoretical results are complemented by identifying novel approximations to the Bayesian objective that attempt to minimize the risk directly. An empirical evaluation compares the variational and new algorithms, shedding further light on their performance. The Gibbs/Bayesian distinction is written out after this list.
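
For item 1, a minimal sketch of the tree-to-rules step that a CoTE-style compression builds on: each root-to-leaf path becomes an if-then rule, the rules from all trees are pooled, and the training set is used to order them. The tree encoding, rule format, and the accuracy/coverage scoring below are illustrative assumptions for propositional trees, not the paper's algorithm (which operates on relational trees).

    # Illustrative sketch (not the paper's CoTE algorithm): turn each
    # root-to-leaf path of a decision tree into an if-then rule, pool
    # the rules from an ensemble, and order them with the training set.

    # Toy trees: internal nodes are (feature, threshold, left, right),
    # leaves are predicted labels. These encodings are assumptions.
    TREE_A = ("x0", 0.5, ("x1", 0.3, "neg", "pos"), "pos")
    TREE_B = ("x1", 0.7, "neg", "pos")

    def tree_to_rules(node, conditions=()):
        """Enumerate (conditions, label) pairs, one per root-to-leaf path."""
        if isinstance(node, str):                        # leaf node
            return [(conditions, node)]
        feat, thr, left, right = node
        return (tree_to_rules(left, conditions + ((feat, "<=", thr),)) +
                tree_to_rules(right, conditions + ((feat, ">", thr),)))

    def rule_fires(conditions, example):
        return all(example[f] <= t if op == "<=" else example[f] > t
                   for f, op, t in conditions)

    def compress(trees, train):
        """Greedy combination: sort pooled rules by training accuracy,
        then coverage; a real system would also prune redundant rules."""
        rules = [r for t in trees for r in tree_to_rules(t)]
        def score(rule):
            conds, label = rule
            hits = [y == label for x, y in train if rule_fires(conds, x)]
            return (sum(hits) / len(hits) if hits else 0.0, len(hits))
        return sorted(rules, key=score, reverse=True)

    train = [({"x0": 0.2, "x1": 0.1}, "neg"),
             ({"x0": 0.2, "x1": 0.9}, "pos"),
             ({"x0": 0.8, "x1": 0.4}, "pos")]
    for conds, label in compress([TREE_A, TREE_B], train)[:3]:
        print(conds, "->", label)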
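
For item 2, a toy 1-D illustration of the idea behind DiSProD's distribution propagation: push the mean and variance of the state forward through the dynamics under an independence assumption, score the resulting trajectory distribution, and improve the action sequence by gradient ascent. The linear-Gaussian dynamics, quadratic reward, and finite-difference gradients are assumptions made for the sketch; DiSProD itself differentiates a symbolic graph.

    # Toy sketch of planning by distribution propagation (not DiSProD
    # itself): 1-D stochastic system s' = s + a + noise, noise ~ N(0, 0.1^2).
    # Propagating (mean, variance) over the horizon aggregates many
    # trajectories into one differentiable objective over the actions.

    NOISE_VAR = 0.1 ** 2
    GOAL, HORIZON, STEPS, LR, EPS = 2.0, 10, 200, 0.05, 1e-5

    def objective(actions, s_mean=0.0, s_var=0.0):
        """Expected return for reward -(s - GOAL)^2 under the propagated
        Gaussian approximation: E[-(s - g)^2] = -(mean - g)^2 - var."""
        total = 0.0
        for a in actions:
            s_mean, s_var = s_mean + a, s_var + NOISE_VAR
            total += -(s_mean - GOAL) ** 2 - s_var
        return total

    actions = [0.0] * HORIZON
    for _ in range(STEPS):                    # gradient ascent on actions;
        for i in range(HORIZON):              # finite differences stand in
            bumped = list(actions)            # for DiSProD's differentiable
            bumped[i] += EPS                  # symbolic graph
            grad = (objective(bumped) - objective(actions)) / EPS
            actions[i] += LR * grad
    print([round(a, 2) for a in actions], round(objective(actions), 3))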
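
For item 3, the place of the logarithm relative to the expectation is what separates the DLM log-loss objective from the usual ELBO-style likelihood term. In standard form (the paper's exact regularized objective may differ):

    \[
    \mathcal{L}_{\mathrm{DLM}} \;=\; -\sum_i \log \mathbb{E}_{q(f_i)}\big[p(y_i \mid f_i)\big]
    \qquad\text{vs.}\qquad
    -\sum_i \mathbb{E}_{q(f_i)}\big[\log p(y_i \mid f_i)\big].
    \]

A naive Monte Carlo estimate of the first term, \(\log\big(\tfrac{1}{S}\sum_s p(y_i \mid f_i^{(s)})\big)\), is biased because the logarithm of an unbiased estimate is not itself unbiased (Jensen's inequality); this is what motivates the unbiased product sampling (uPS) construction and the convergence analysis for biased Monte Carlo (bMC).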
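
For item 4, the abstract's Gibbs-versus-Bayesian contrast in its standard form (the paper's precise definitions and bounds are not reproduced here):

    \[
    R_{\mathrm{Gibbs}}(q) \;=\; \mathbb{E}_{(x,y)}\,\mathbb{E}_{\theta \sim q}\big[\ell(h_\theta(x),\,y)\big],
    \qquad
    R_{\mathrm{Bayes}}(q) \;=\; \mathbb{E}_{(x,y)}\Big[\ell\big(\mathbb{E}_{\theta \sim q}[h_\theta(x)],\,y\big)\Big].
    \]

Bounding the Bayesian risk is harder because the loss is applied after averaging the predictions over the approximate posterior q. When the loss is convex in the prediction, Jensen's inequality gives \(R_{\mathrm{Bayes}} \le R_{\mathrm{Gibbs}}\); in general, Gibbs-risk bounds do not transfer to the averaged predictor, which is why the abstract highlights this distinction.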