In this article, we introduce a compact representation for measured BRDFs by leveraging Neural Processes (NPs). Unlike prior methods that express these BRDFs as discrete high-dimensional matrices or tensors, our technique treats measured BRDFs as continuous functions and operates in the corresponding function space. Specifically, given evaluations of a set of BRDFs, such as those in the MERL and EPFL datasets, our method learns a low-dimensional latent space as well as a few neural networks that encode measured or new BRDFs into this space and decode them from it in a non-linear fashion. Leveraging this latent space and the flexibility offered by the NPs formulation, our encoded BRDFs are highly compact and more accurate than those of prior methods. We demonstrate the practical usefulness of our approach through two important applications: BRDF compression and editing. Additionally, we design two alternative post-trained decoders that, respectively, achieve a better compression ratio for individual BRDFs and enable importance sampling of BRDFs.
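To illustrate the encode/decode structure described above, here is a minimal PyTorch sketch of a Neural-Process-style encoder that pools a set of BRDF samples into one latent code and a decoder that evaluates the BRDF at query directions from that code. The layer sizes, the latent dimension, and the 4-D Rusinkiewicz-angle input are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch of a Neural-Process-style BRDF autoencoder.
# Layer sizes, the 7-D latent, and the 4-D angle input are assumptions
# for demonstration, not the paper's exact architecture.
import torch
import torch.nn as nn

class BRDFEncoder(nn.Module):
    """Maps a set of (direction, value) samples to one latent code."""
    def __init__(self, dir_dim=4, latent_dim=7, hidden=128):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(dir_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, dirs, values):
        # dirs: (N, 4) Rusinkiewicz angles, values: (N, 3) RGB reflectance
        feats = self.point_net(torch.cat([dirs, values], dim=-1))
        return feats.mean(dim=0)          # permutation-invariant pooling

class BRDFDecoder(nn.Module):
    """Evaluates the encoded BRDF at arbitrary query directions."""
    def __init__(self, dir_dim=4, latent_dim=7, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dir_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, z, query_dirs):
        z_rep = z.expand(query_dirs.shape[0], -1)
        return self.net(torch.cat([query_dirs, z_rep], dim=-1))

# Usage: compress a measured BRDF to a 7-D vector, then re-evaluate it.
enc, dec = BRDFEncoder(), BRDFDecoder()
dirs, vals = torch.rand(1024, 4), torch.rand(1024, 3)   # stand-in samples
z = enc(dirs, vals)                                      # compact code
recon = dec(z, dirs)                                     # decoded BRDF
loss = nn.functional.mse_loss(recon, vals)
```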
Character motion in function space
We address the problem of representing and approximating animated character motion by introducing a novel form of motion expression in a function space. For a given set of motions, our method extracts a set of orthonormal basis (ONB) functions. Each motion is then expressed as a vector in the ONB space or approximated by a subset of the ONB functions. Inspired by static PCA, our approach works with time-varying functions. The set of ONB functions is extracted from the input motions by functional principal component analysis and provides optimal coverage of the given input set. We demonstrate applications of this compact representation through a motion distance metric, a motion synthesis algorithm, and motion level of detail. Not only can a motion be represented using the ONB; a new motion can also be synthesized by optimizing the connectivity of reconstructed motion functions or by interpolating motion vectors. The approximation quality of the reconstructed motion is controlled by the number of ONB functions used, and this property is also exploited for level of detail. Our representation also compresses the motion data: although the generated ONB, which is unique to each set of input motions, must be stored, we show that the compression factor of our representation is higher than that of commonly used analytic function methods, and our approach also yields a lower distortion rate.
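As a rough illustration of this pipeline, the following sketch builds an orthonormal basis from discretely sampled motion curves using ordinary PCA (via SVD), encodes each motion as a short coefficient vector, and reconstructs it from a chosen number of basis functions. The array shapes, the use of plain SVD in place of the paper's functional PCA, and the synthetic data are assumptions made for demonstration only.

```python
# Sketch of representing motions with an orthonormal basis learned from data.
# Discrete sampling and ordinary PCA stand in for the paper's functional PCA.
import numpy as np

def build_onb(motions, k):
    """motions: (M, T, C) array of M motions, T frames, C channels.
    Returns the mean curve and the first k orthonormal basis functions."""
    X = motions.reshape(len(motions), -1)          # flatten each motion
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                            # rows are orthonormal

def encode(motion, mean, basis):
    """Project one motion onto the ONB -> compact coefficient vector."""
    return basis @ (motion.ravel() - mean)

def decode(coeffs, mean, basis, shape):
    """Reconstruct; using fewer coefficients gives a coarser level of detail."""
    return (mean + coeffs @ basis).reshape(shape)

# Usage: 20 synthetic motions, 60 frames, 30 channels, 8 basis functions.
rng = np.random.default_rng(0)
motions = rng.normal(size=(20, 60, 30))
mean, basis = build_onb(motions, k=8)
c = encode(motions[0], mean, basis)                # 8 numbers per motion
approx = decode(c, mean, basis, motions[0].shape)
dist = np.linalg.norm(encode(motions[1], mean, basis) - c)  # motion distance
```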
- PAR ID: 10167946
- Journal Name: The Visual Computer
- ISSN: 0178-2789
- Sponsoring Org: National Science Foundation
More Like this
-
In this article, we introduce an error representation function to perform adaptivity in time for the recently developed time-marching Discontinuous Petrov–Galerkin (DPG) scheme. We first provide an analytical expression for the error, which is the Riesz representation of the residual. Then, we approximate the error by enriching the test space in such a way that it contains the optimal test functions. The local error contributions can be efficiently computed by adding a few equations to the time-marching scheme. We analyze the quality of this approximation by constructing a Fortin operator and providing an a posteriori error estimate. The time-marching scheme proposed in this article provides an optimal solution along with a set of efficient and reliable local error contributions for performing adaptivity. We validate our method on both parabolic and hyperbolic problems.
-
Unary computing is a relatively new method for implementing non-linear functions using few hardware resources compared to binary computing. In its original form, unary computing provides no trade-off between accuracy and hardware cost. In this work, we propose a novel self-similarity-based method that optimizes the previous hybrid binary-unary method and provides a trade-off between accuracy and hardware cost by introducing controlled levels of approximation. Given a target maximum error, our method breaks a function into sub-functions and tries to find the minimum set of unique sub-functions from which all the others can be derived through trivial bit-wise transformations. We compare our method to previous works such as HBU (hybrid binary-unary) and FloPoCo-PPA (piece-wise polynomial approximation) on a number of non-linear functions, including Log, Exp, Sigmoid, GELU, Sin, and Sqr, which are used in neural networks and image processing applications. Without any loss of accuracy, our method improves the area-delay-product hardware cost of HBU on average by 7% at 8-bit, 20% at 10-bit, and 35% at 12-bit resolutions. When approximation of the least significant bit is allowed, our method reduces the hardware cost of HBU on average by 21% at 8-bit, 49% at 10-bit, and 60% at 12-bit resolutions, and, using the same error budget as given to FloPoCo-PPA, it reduces the hardware cost of FloPoCo-PPA on average by 79% at 8-bit, 58% at 10-bit, and 9% at 12-bit resolutions. Finally, we show the benefits of our method by implementing a 10-bit homomorphic filter, which is used in image processing applications. Our method implements the filter with no quality loss at a lower hardware cost than previous approximate and exact methods.
-
Many dynamical systems described by nonlinear ODEs are unstable. Their associated solutions do not converge towards an equilibrium point, but rather converge towards some invariant subset of the state space called an attractor set. For a given ODE, in general, the existence, shape, and structure of its attractor sets are unknown. Fortunately, the sublevel sets of Lyapunov functions can provide bounds on the attractor sets of ODEs. In this paper we propose a new Lyapunov characterization of attractor sets that is well suited to the problem of finding the minimal attractor set. We show our Lyapunov characterization is non-conservative even when restricted to Sum-of-Squares (SOS) Lyapunov functions. Given these results, we propose an SOS programming problem based on determinant maximization that yields an SOS Lyapunov function whose 1-sublevel set has minimal volume, is an attractor set itself, and provides an optimal outer approximation of the minimal attractor set of the ODE. Several numerical examples are presented, including the Lorenz attractor and the Van der Pol oscillator.
-
Many applications in robotics require computing a robot manipulator's "proximity" to a collision state in a given configuration. This collision proximity is commonly framed as a summation over the closest Euclidean distances between many pairs of rigid shapes in a scene. Computing many such pairwise distances is inefficient, while more efficient approximations of this procedure, such as those obtained through supervised learning, lack accuracy and robustness. In this work, we present an approach for computing a collision proximity function for robot manipulators that formalizes the trade-off between efficiency and accuracy and provides an algorithm that gives control over it. Our algorithm, called Proxima, works in one of two ways: (1) given a time budget as input, it returns an as-accurate-as-possible proximity approximation within that time; or (2) given an accuracy budget, it returns an as-fast-as-possible proximity approximation that is within the given accuracy bounds. We show the robustness of our approach through analytical investigation and simulation experiments on a wide set of robot models ranging from 6 to 132 degrees of freedom. We demonstrate that controlling the trade-off between efficiency and accuracy in proximity computations via our approach can enable safe and accurate real-time robot motion optimization, even on high-dimensional robot models.
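To make the time-budget/accuracy-budget idea concrete, here is a small anytime-style sketch in Python: it sums cheap per-pair distance estimates with error bounds, then refines pairs with exact distance queries until either budget is exhausted. The interface, the `(estimate, error_bound)` convention, and the stand-in geometry are assumptions for illustration and do not reproduce Proxima's actual algorithm.

```python
# Illustrative anytime proximity sketch (not Proxima's actual algorithm):
# sum cheap per-pair distance estimates, then refine with exact queries
# until a time budget or an accuracy budget is met.
import time
import numpy as np

def proximity(pairs, time_budget=None, accuracy_budget=None):
    """pairs: list of (exact_distance_fn, cheap_bound_fn) callables.
    cheap_bound_fn returns (estimate, error_bound).
    Returns an approximate sum of pairwise distances and its error bound."""
    start = time.perf_counter()
    total, error = 0.0, 0.0
    estimates = [(bound(), exact) for exact, bound in pairs]
    for (est, err), _ in estimates:          # start from the cheap estimates
        total += est
        error += err
    for (est, err), exact in estimates:      # refine while budget remains
        out_of_time = time_budget is not None and \
            time.perf_counter() - start > time_budget
        accurate_enough = accuracy_budget is not None and error <= accuracy_budget
        if out_of_time or accurate_enough:
            break
        d = exact()                          # pay for one exact distance query
        total += d - est                     # replace the estimate
        error -= err                         # its error no longer contributes
    return total, error

# Usage with stand-in geometry: two points as degenerate "shapes".
a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
pairs = [(lambda: float(np.linalg.norm(a - b)),   # exact distance
          lambda: (0.5, 0.5))]                     # (estimate, error bound)
print(proximity(pairs, accuracy_budget=0.1))
```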