<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Progressive embedding</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>07/12/2019</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10132345</idno>
					<idno type="doi">10.1145/3306346.3323012</idno>
					<title level='j'>ACM Transactions on Graphics</title>
<idno type="issn">0730-0301</idno>
<biblScope unit="volume">38</biblScope>
<biblScope unit="issue">4</biblScope>					

					<author>Hanxiao Shen</author><author>Zhongshi Jiang</author><author>Denis Zorin</author><author>Daniele Panozzo</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[Tutte embedding is one of the most common building blocks in geometry processing algorithms due to its simplicity and provable guarantees. Although provably correct in infinite precision arithmetic, it fails in challenging cases when implemented using floating point arithmetic, largely due to the induced exponential area changes. We propose Progressive Embedding, with similar theoretical guarantees to Tutte embedding, but more resilient to the rounding error of floating point arithmetic. Inspired by progressive meshes, we collapse edges on an invalid embedding to a valid, simplified mesh, then insert points back while maintaining validity. We demonstrate the robustness of our method by computing embeddings for a large collection of disk topology meshes. By combining our robust embedding with a variant of the matchmaker algorithm, we propose a general algorithm for the problem of mapping multiply connected domains with arbitrary hard constraints to the plane, with applications in texture mapping and remeshing. CCS Concepts: • Computing methodologies → Shape modeling.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Piecewise linear surface-to-plane maps, or parametrizations, are ubiquitous in computer graphics, geometry processing, mechanical engineering, and scientific visualization. Depending on the application, the maps are required to exhibit different properties, most commonly low distortion, local injectivity, and global bijectivity.</p><p>The last two properties are challenging to guarantee for discrete maps. Most algorithms with guarantees use Tutte embedding as a component. Tutte embedding is a construction that is guaranteed to create bijective mappings under minimal assumptions, if both domains are simply connected and the target planar domain is convex. However, the guarantee only holds if the computation is performed in arbitrary precision rather than floating point arithmetic, as is commonly done. Failure due to floating point approximation is not as uncommon as one would assume, as the algorithm is likely to create extreme variations of scale and aspect ratio in complex mapping cases. To quantitatively evaluate this issue, we computed Tutte embeddings on 2718 models (all the genus 0 models from Thingi10k <ref type="bibr">[Zhou and Jacobson 2016]</ref>) using double precision, and observed 80 failures. To the best of our knowledge, this problem has not been addressed before in the literature.</p><p>This rate of failure is problematic for batch processing large geometric collections (for example, for processing geometric deep learning datasets) or when the embedding has to be computed many times (for example, in cross-parametrization <ref type="bibr">[Kraevoy et al. 2003;</ref><ref type="bibr">Schreiner et al. 2004]</ref>). 
In these scenarios, a failure rate of 2.9% may not be tolerable, since it is not realistic to manually fix hundreds of problematic cases, and if failure happens on large meshes with millions of triangles it might not even be possible to fix them by hand.</p><p>A simple solution to this problem is the use of multi-precision (or rational) arithmetic <ref type="bibr">[Granlund 2018</ref>]: if enough bits are used to represent the mantissa and exponent of the floating point representation, Tutte embedding will succeed, since the solution of a linear system can be computed exactly. However, the result in high precision is not directly usable by downstream applications, and must be rounded (or "snapped" <ref type="bibr">[Halperin and Packer 2002]</ref>) to floating point coordinates. This is a surprisingly challenging problem for which, to the best of our knowledge, no solution applicable to our setting exists (Section 3.1).</p><p>Instead, we propose a progressive algorithm to directly generate an embedding using floating point coordinates. We start from an initial, possibly invalid, floating point planar parametrization, and we make it valid by collapsing all flipped and degenerate parts of the (possibly invalid) embedding produced, e.g., by a floating-point Tutte algorithm. We re-insert one vertex of the original mesh at a time, preserving the validity of the map at every step. This approach is inspired by <ref type="bibr">[Schreiner et al. 2004]</ref>, which proposes a progressive algorithm for computing cross-parametrizations based on progressive meshes <ref type="bibr">[Hoppe 1996</ref>]. Our algorithm differs since we do not know a valid position for the inserted vertices, and we thus have to compute it as the vertices are added back. 
We provide a formal proof of correctness of our method in arbitrary precision (obtaining the same formal guarantees as Tutte embedding), and we practically demonstrate its superior robustness by parametrizing a large collection of 10k models.</p><p>Using our new embedding method and the matchmaker algorithm <ref type="bibr">[Kraevoy and Sheffer 2004;</ref><ref type="bibr">Kraevoy et al. 2003</ref>] as a foundation, we develop an algorithm for mapping between multiply-connected domains with arbitrary constraints, supporting fully general self-overlapping domains as the target. We experimentally show that our algorithm is very robust, producing valid and distortion-optimized maps even for challenging cases where the original matchmaker algorithm fails due to numerical problems. We demonstrate the practical utility of our algorithm for UV mapping and quadrangulation applications.</p><p>To foster replicability of results and to maximize the practical impact of our algorithm, we also attach a reference implementation: <ref type="url">https://github.com/hankstag/progressive_embedding</ref> </p></div>
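For readers unfamiliar with the construction discussed above, the following is a minimal, illustrative Python sketch of floating-point Tutte embedding with uniform weights (our own toy code, not the paper's C++/libigl implementation; names are ours): each interior vertex is placed at the average of its neighbors, which reduces to one linear system per coordinate, while the boundary is pinned to a convex polygon.

```python
import numpy as np

def tutte_embed(faces, n_vertices, boundary, boundary_uv):
    """Uniform-weight Tutte embedding in floating point: pin the boundary
    to a convex polygon and place every interior vertex at the average of
    its neighbors, which amounts to solving one linear system per axis."""
    adj = [set() for _ in range(n_vertices)]
    for a, b, c in faces:
        adj[a].update((b, c)); adj[b].update((a, c)); adj[c].update((a, b))
    on_boundary = [False] * n_vertices
    for v in boundary:
        on_boundary[v] = True
    A = np.zeros((n_vertices, n_vertices))
    rhs = np.zeros((n_vertices, 2))
    for v in range(n_vertices):
        if on_boundary[v]:
            A[v, v] = 1.0               # pinned: uv[v] = boundary_uv
        else:
            A[v, v] = len(adj[v])       # deg(v) * uv[v] - sum of neighbors = 0
            for w in adj[v]:
                A[v, w] = -1.0
    rhs[list(boundary)] = boundary_uv
    return np.linalg.solve(A, rhs)

def signed_area(uv, tri):
    """Signed area of one triangle of the embedding; negative means flipped."""
    (x0, y0), (x1, y1), (x2, y2) = uv[tri[0]], uv[tri[1]], uv[tri[2]]
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
```

On a toy fan mesh (a square boundary with one interior vertex) the interior vertex lands at the centroid of its neighbors and all triangles keep positive area; the failures discussed in the paper arise on large meshes, where the solution spans many orders of magnitude.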
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK 2.1 Planar Embedding of Graphs and Meshes</head><p>Fáry's theorem <ref type="bibr">[F&#225;ry 1948]</ref> states that any planar graph can be embedded in the plane with straight edges. Tutte <ref type="bibr">[Tutte 1963</ref>] extends this result to the case of a fixed convex boundary with a spring analogy, and <ref type="bibr">[Floater 1997</ref>] established its connection to the parameterization methods in the geometry processing community, extending Tutte's uniform weights to arbitrary positive ones. In both cases, the problem is reduced to solving a linear system of equations, and the minimum triangle area of the resulting embedding can be exponentially small with respect to the number of vertices. There has been active effort in the graph drawing community to address these issues, by bounding the total area when drawing on integer grids, or equivalently, controlling the minimum resolution <ref type="bibr">[Chambers et al. 2011</ref>] under fixed diameter. Most notably, <ref type="bibr">[Schnyder 1990</ref>] shows an algorithm to embed a planar graph onto integer grids inside a triangular region, and <ref type="bibr">[Chambers et al. 2011</ref>] proves a polynomial upper bound on the area while keeping a specified convex boundary shape: the proof is constructive and may (potentially) be used as a basis for a practical algorithm. In all cases, rounding problems will affect these algorithms as the size of the graph grows (Section 3.1).</p><p>Orbifold Tutte Embedding. Multiple extensions of Tutte's theorem to map surfaces to different co-domains have been proposed. In particular, the theorem has been extended to map surfaces to a Euclidean orbifold <ref type="bibr">[Aigerman and Lipman 2015]</ref>, to a hyperbolic orbifold <ref type="bibr">[Aigerman and Lipman 2016]</ref>, and to a spherical orbifold <ref type="bibr">[Aigerman et al. 2017]</ref>. 
All three methods support hard positional constraints and ensure the generation of a bijective map between the surface and the orbifold in infinite precision arithmetic. These methods also suffer from numerical issues similar to Tutte's, and extending our algorithm to orbifold embeddings is an interesting direction for future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Progressive Meshes</head><p>The well-known progressive meshes algorithm <ref type="bibr">[Hoppe 1996;</ref><ref type="bibr">Sander et al. 2001]</ref> shows how a triangle mesh can be simplified by collapsing one edge at a time, and reconstructed by applying the inverse topological operations in the inverse order. This scheme has been introduced as an efficient way to store, transmit, and render large meshes, where the per-vertex properties of the removed vertices are stored together with the information required to insert them back. This work has later been applied to compute inter-surface mappings <ref type="bibr">[Schreiner et al. 2004]</ref>, by jointly simplifying two meshes into a common base mesh, then optimizing their isometric distortion while reinserting the vertices into the base mesh.</p><p>We use the same idea to eliminate problematic regions of an existing embedding (either flipped, or with a high distortion), and then reinsert one vertex at a time, while preserving the quality of the triangulation. Differently from progressive meshes, in our case we do not have geometric information available that could help us decide where a vertex should be inserted to obtain a valid embedding.</p></div>
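The collapse/reinsert bookkeeping described above can be sketched in a few lines of Python (our own simplified illustration, operating on a plain face list rather than a half-edge structure; function names are ours): a collapse pushes onto a log everything needed to undo it, and a vertex split pops the log and applies the inverse operation.

```python
def collapse_edge(faces, positions, u, v, log):
    """Collapse vertex v onto vertex u: faces containing both u and v
    degenerate and are dropped; the other faces referencing v are
    relabeled to u. Everything needed to undo the operation is pushed
    onto `log`, mimicking a progressive-mesh record."""
    removed = [f for f in faces if v in f]
    kept = [f for f in faces if v not in f]
    relabeled = [tuple(u if i == v else i for i in f) for f in removed if u not in f]
    log.append((u, v, positions[v], removed))
    return kept + relabeled

def vertex_split(faces, positions, log):
    """Undo the most recent collapse: restore v's position and the
    original faces (the inverse topological operation, in inverse order)."""
    u, v, pos_v, removed = log.pop()
    positions[v] = pos_v
    undone = {tuple(u if i == v else i for i in f) for f in removed if u not in f}
    return [f for f in faces if tuple(f) not in undone] + removed
```

In the paper's setting the reinserted vertex position is precisely what is not known in advance, which is why the insertion stage of Section 3.2 must compute it.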
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Distortion-Minimizing Mappings</head><p>In this section, we focus on recent works closely related to generating distortion-minimizing, locally injective, and globally bijective discrete maps, and we refer to <ref type="bibr">[Floater and Hormann 2005;</ref><ref type="bibr">Hormann et al. 2007;</ref><ref type="bibr">Sheffer et al. 2006</ref>] for a comprehensive treatment of earlier parametrization methods without these properties.</p><p>A discrete map is locally injective if triangles maintain their orientation (i.e., they do not flip) and if the sum of (unsigned) triangle angles around each internal vertex is precisely 2&#960; <ref type="bibr">[Weber and Zorin 2014]</ref>. Three main families of methods have been proposed to deal with this challenging constraint: barrier, convexification, and hybrid algorithms.</p><p>Barrier Algorithms. Barrier algorithms require a valid initial solution, and then optimize its quality without leaving the feasible space. The key idea is to adopt quality metrics diverging to infinity when triangles become degenerate, thus inhibiting flips. Popular choices strive to preserve angles <ref type="bibr">[Degener et al. 2003;</ref><ref type="bibr">Hormann and Greiner 2000]</ref> or lengths <ref type="bibr">[Aigerman et al. 2014;</ref><ref type="bibr">Poranne and Lipman 2014;</ref><ref type="bibr">Sander et al. 2001;</ref><ref type="bibr">Smith and Schaefer 2015;</ref><ref type="bibr">Sorkine et al. 2002]</ref>. Alternatively, a barrier function can be added to existing energies to enforce local injectivity <ref type="bibr">[Sch&#252;ller et al. 2013</ref>]. These non-linear energies are difficult to minimize, prompting a series of methods specifically targeting this problem. They include coordinate descent <ref type="bibr">[Hormann and Greiner 2000;</ref><ref type="bibr">Labsik et al. 2000]</ref>, parallel gradient descent <ref type="bibr">[Fu et al. 
2015</ref>], Anderson acceleration <ref type="bibr">[Peng et al. 2018]</ref>, as well as other quasi-Newton approaches <ref type="bibr">[Claici et al. 2017;</ref><ref type="bibr">Kovalsky et al. 2016;</ref><ref type="bibr">Liu et al. 2018;</ref><ref type="bibr">Rabinovich et al. 2017;</ref><ref type="bibr">Shtengel et al. 2017;</ref><ref type="bibr">Smith and Schaefer 2015;</ref><ref type="bibr">Zhu et al. 2018]</ref>.</p><p>All these methods support hard constraints if they are already satisfied in the initial map, which is the key idea used in MatchMaker <ref type="bibr">[Kraevoy et al. 2003</ref>]. Our progressive embedding can be used to robustly generate the initial map, which can then be improved by any of the previous techniques (Section 5).</p><p>Projection Algorithms. An essential component of these methods is a convexified form of the injectivity constraints <ref type="bibr">[Kovalsky et al. 2015;</ref><ref type="bibr">Lipman 2012]</ref>. While these methods naturally support hard injectivity constraints, they might fail to find a feasible solution, with no output generated. The only known way to guarantee that a feasible solution exists is to formulate the convexified constraints using a reference frame derived from a valid (although potentially very high distortion) solution.</p><p>Hybrid Algorithms. Hybrid algorithms are an interesting mix between these two approaches <ref type="bibr">[Fu and Liu 2016;</ref><ref type="bibr">Poranne et al. 2017]</ref>. The initial guess is produced by separating all triangles and isometrically rotating them into the UV space. A barrier method is then used to prevent them from flipping, while trying to seal the seams. This approach might fail to seal all the seams, thus not producing a valid map.</p><p>Globally Bijective Maps. For simply connected domains, bijective maps are locally injective maps whose boundary does not self-intersect. All embeddings described in Section 2.1 satisfy this property. 
These methods have been extended to non-convex, self-overlapping polygons [Weber and Zorin 2014] and polyhedra <ref type="bibr">[Campen et al. 2016</ref>], but they still require a fixed boundary. Few methods can produce bijective maps while leaving the boundary free, relying on either collision detection <ref type="bibr">[Smith and Schaefer 2015]</ref> or scaffolding elements <ref type="bibr">[Gotsman and Surazhsky 2001;</ref><ref type="bibr">Jiang et al. 2017;</ref><ref type="bibr">M&#252;ller et al. 2015;</ref><ref type="bibr">Zhang et al. 2005]</ref>. All free boundary methods require a starting point: our algorithm can be used to generate it, enabling these algorithms to create bijective maps with hard constraints (Section 4).</p><p>Hard Positional Constraints and Refinement. Matchmaker <ref type="bibr">[Kraevoy et al. 2003</ref>] introduced hard positional constraints for texture mapping applications. The algorithm uses a two-step approach, first generating a valid map, and then optimizing its geometric quality. The method is one of the few using refinement to guarantee the existence of feasible solutions. It has been extended with an intermediate warping stage to align the constraints in <ref type="bibr">[Lee et al. 2008</ref>]. We show in Section 4 how our embedding can be used within Matchmaker to increase its robustness, and we also show how to extend Matchmaker to support self-overlapping polygonal target domains.</p><p>Cross-Parametrization. Cross-parametrization, i.e., the computation of a map between two surfaces, is another problem that often relies on planar embeddings. <ref type="bibr">[Schreiner et al. 2004</ref>] and <ref type="bibr">[Kraevoy and Sheffer 2004]</ref> proposed the first provably guaranteed solutions for computing maps between surfaces, by reducing the problem to mapping both surfaces to a common subdomain, using either Tutte's embedding or a simplification approach. 
A similar construction that cuts the surface open into a single topological disc has been proposed in <ref type="bibr">[Aigerman et al. 2014]</ref>, and extended to also allow the optimization of the seam positions in <ref type="bibr">[Aigerman et al. 2015</ref>]. Floating point rounding errors have not been considered in any of these works, which are more prone to fail as the resolution of the mesh increases or whenever the user-provided constraints introduce a high distortion (Section 5).</p><p>Global Parametrization. Field-aligned parametrization methods <ref type="bibr">[Bommes et al. 2009</ref>] strive to compute a locally injective map <ref type="bibr">[Bommes et al. 2013]</ref> whose gradient is aligned with a user-provided directional field. We refer the interested reader to <ref type="bibr">[Bommes et al. 2012]</ref> for a comprehensive overview of these techniques. Our embedding algorithm can be used to compute parametrizations to a target self-overlapping polygon, enabling robust generation of these parametrizations if a valid boundary polygon is provided (Section 5).</p></div>
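The local injectivity condition quoted at the start of this section (positive orientation everywhere, and unsigned angles summing to 2π around each interior vertex) is easy to state as a check. Here is an illustrative Python sketch (our own code, not from the paper; `interior` is the list of interior vertex indices, and the angle-sum test uses a numerical tolerance rather than exact arithmetic):

```python
import math

def _signed_area(a, b, c):
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def _angle_at(p, q, r):
    # unsigned angle at corner p of triangle (p, q, r)
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    return math.atan2(abs(cross), dot)

def is_locally_injective(uv, faces, interior, tol=1e-9):
    """Check both conditions of [Weber and Zorin 2014]: positive
    orientation everywhere, and unsigned angles summing to 2*pi
    around every interior vertex."""
    if any(_signed_area(uv[a], uv[b], uv[c]) <= 0 for a, b, c in faces):
        return False
    angle_sum = {v: 0.0 for v in interior}
    for a, b, c in faces:
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            if p in angle_sum:
                angle_sum[p] += _angle_at(uv[p], uv[q], uv[r])
    return all(abs(s - 2.0 * math.pi) < tol for s in angle_sum.values())
```

Note that positive orientation alone does not imply local injectivity: the angle sum at a vertex can wrap around to a multiple of 2π, which is exactly what the second condition excludes.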
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">PROGRESSIVE EMBEDDING 3.1 Analysis of Tutte Embedding in Floating Points</head><p>We discuss in detail when Tutte embedding implemented in floating point may fail, and also show that straightforward solutions based on off-the-shelf geometry processing tools do not solve these issues.</p><p>Tutte Embedding Implemented in Floating Points. We use the implementation of Tutte embedding in libigl <ref type="bibr">[Jacobson et al. 2016]</ref>, and apply it to all the 2718 genus 0 models of the Thingi10k dataset <ref type="bibr">[Zhou and Jacobson 2016]</ref>, after cleaning them up and improving their quality using TetWild <ref type="bibr">[Hu et al. 2018]</ref> to ensure that no degenerate triangles are present. We also ensure that the meshes are 3-connected by refining them locally. For every model, we randomly pick and delete a triangle, and map the resulting boundary to an equilateral triangle. We compute the Tutte embedding, and check for flips using CGAL's exact floating point predicates <ref type="bibr">[Br&#246;nnimann et al. 2018</ref>]. The check fails for 80 models, due to the numerical errors introduced in the mapping. In retrospect, this is not surprising, since it is well known that Tutte planar drawings may require exponential area when drawn on integer grids. Two problematic cases are shown in Figure <ref type="figure">2</ref>, where the embedding introduces a large variation of scale, and the flips occur on triangles with small areas.</p><p>Multi-Precision Tutte Embedding with Snap Rounding. A straightforward way to address this problem is to increase the number of bits used in the floating point representation. We double the number of bits using the library MPFR <ref type="bibr">[Fousse et al. 2007]</ref>, which is directly integrated into Eigen, and can thus be used with the Tutte embedding in libigl with minimal code changes. With this setup, all the problematic cases are solved. 
However, the runtime increases by around one order of magnitude, and, most importantly, the results cannot be rounded back to floating point, since trivial rounding introduces flips. Snap rounding <ref type="bibr">[Packer 2018</ref>] could be used to avoid them, but it may collapse large regions of the mesh: 6.3% of the vertices of the model shown in the bottom left of Figure <ref type="figure">8</ref> are collapsed when using a snap rounding resolution of 10^-16 times the diagonal of the bounding box of the embedding.</p><p>Multi-Precision Tutte Embedding with Quality Optimization. The problem with rounding to floats is induced by the small triangles (and correspondingly small edges), which lead to flips after snapping. A possible way to address this issue is to use a mesh optimization algorithm, in multi-precision representation, before rounding to floats. We tested two approaches: (1) SLIM <ref type="bibr">[Rabinovich et al. 2017]</ref> adapted to run in multi-precision, and (2) minimizing the symmetric Dirichlet energy by moving one vertex at a time using coordinate descent <ref type="bibr">[Hormann and Greiner 2000]</ref>. The first approach is prohibitively slow, due to the linear solve in high precision and the very small steps caused by the elements with almost zero area. The second one succeeds on 57 models, but still fails on 23, even after 24 hours of running time.</p></div>
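The flip check used in the experiment above relies on exact predicates. In Python, the same exact sign-of-area test can be sketched with `fractions.Fraction` (our own illustration, which mirrors the role of CGAL's exact predicates, not their implementation): doubles convert to rationals without rounding, so the sign of the determinant is always correct, however small the triangle.

```python
from fractions import Fraction

def orient2d_exact(a, b, c):
    """Exact orientation test: doubles convert to Fraction without
    rounding, and rational arithmetic is exact, so the sign of the
    doubled signed area is always correct. Returns +1 (ccw), -1 (cw),
    or 0 (degenerate)."""
    ax, ay = Fraction(a[0]), Fraction(a[1])
    bx, by = Fraction(b[0]), Fraction(b[1])
    cx, cy = Fraction(c[0]), Fraction(c[1])
    det = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)
    return (det > 0) - (det < 0)

def count_flips(uv, faces):
    """Number of non-ccw triangles in an embedding (0 for a valid one)."""
    return sum(orient2d_exact(uv[a], uv[b], uv[c]) <= 0 for a, b, c in faces)
```

A plain double-precision evaluation of the same determinant can return the wrong sign for nearly degenerate triangles, which is precisely the regime the failing Tutte embeddings produce.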
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Progressive Embedding</head><p>Our approach draws on the ideas of progressive meshes <ref type="bibr">[Hoppe 1996</ref>] and inter-surface mappings <ref type="bibr">[Schreiner et al. 2004</ref>], which are, in turn, closely related to theoretical ideas from PL topology (e.g., <ref type="bibr">[Hudson and Shaneson 1969]</ref>).</p><p>Our algorithm does not require Tutte embedding and can be used to construct an embedding from scratch or from a random initial mapping (Figure <ref type="figure">3</ref>). It can be accelerated by using an existing, possibly invalid, embedding as a starting point. A triangle is invalid if its signed area is negative, or if its quality measure is below a threshold (we use the symmetric Dirichlet energy <ref type="bibr">[Smith and Schaefer 2015]</ref> with respect to a canonical equilateral triangle whose area is the area of the target boundary polygon divided by the number of triangles, and mark a triangle invalid if its energy is above &#964; = 1e20). Using a quality measure in addition to the signed area is important, since triangles with small, positive areas might cause numerical problems during the vertex insertion (Stage 2 below).</p><p>Starting from an invalid embedding, our algorithm (1) performs edge collapses until the simplified mesh has no invalid triangles (Algorithm 1), and (2) progressively inserts back each vertex in the same order (Algorithm 2), with feasibility of insertion ensured at each step.</p><p>Stage 1: Simplification. At this stage, we iteratively find an interior edge that can be collapsed, until all invalid elements are removed from the initial embedding or a single interior vertex is left (Algorithm 1). A theorem in <ref type="bibr">[Mijatovi&#263; 2003</ref>] and our Theorem A.8 guarantee that a sequence of collapses reducing the mesh to a mesh with a single interior vertex can always be found; as the boundary embedding is convex, we can also always find a position for this vertex to create a valid embedding for the fully simplified mesh. The simplification algorithm starts by tagging all invalid triangles (Line 1), and attempts to collapse all their edges. If this procedure is successful in eliminating all invalid triangles, the algorithm terminates; otherwise, all the triangles adjacent to tagged triangles are tagged (Line 12), and their edges collapsed. Note that we only allow edge collapses on internal edges, to avoid changes to the boundary. This algorithm is guaranteed to terminate, since in the worst case it will tag the entire mesh, and Theorem A.8 ensures that at least one edge will be collapsible. If only one internal vertex is left (Line 4), we move it to the barycenter of the boundary vertices (which is inside the convex boundary by construction, but might fail in degenerate cases, as discussed in Section 6). Once the algorithm terminates, the resulting simplified mesh has no inverted triangles and, by Proposition A.9, also has sums of triangle angles at each vertex equal to 2&#960;.</p><p>Fig. <ref type="figure">5</ref>. The two admissible insertion positions from Lemma A.11. The dark region on the left shows the valid positions for v_m while fixing v_0; the right case is the opposite. Our algorithm opts for the left case for stability, since the calculation of the valid sector in the right case involves the intersection of the prolonged edges (dashed lines) and the 1-ring neighbors. We pick the valid sector as the one that has an inner angle sum smaller than &#960;.</p><p>Stage 2: Insertion. Starting from the valid embedding computed after Stage 1, we perform a sequence of splits reverting the collapses, while maintaining embedding validity at every step (Algorithm 2). 
Lemma A.11 ensures that this is always possible (in infinite precision), as the area of each triangle after insertion is always positive whenever the newly inserted points lie in the valid sector (Figure <ref type="figure">5</ref>). The algorithm first computes candidate directions in the valid sector, then performs a flip-avoiding line search <ref type="bibr">[Smith and Schaefer 2015]</ref> (Line 3) to find candidate positions P(v_m) for the newly inserted vertex v_m, such that the 1-ring neighborhoods are valid. In our experiments, we use step length &#945; = 0.8, and cap the number of line search iterations at 75. If at least one candidate is found, the split is performed using the candidate resulting in a mesh minimizing the error, measured as the maximum of the 1-ring energy. Since a candidate position always exists in infinite precision (Lemma A.11), the only possible cause for not finding it is a lack of representation power in the floating point representation. We thus improve the quality of the mesh (Line 11) until a candidate is found.</p><p>This algorithm may still fail to find a candidate in degenerate configurations. However, we experimentally found that the constrained mesh smoothing is very effective at ameliorating this issue, keeping the mesh quality sufficiently high during the insertion to allow split operations to succeed (Figure <ref type="figure">6</ref>). In all our experiments we only found one failure case, where the prescribed target boundary is a numerically degenerate triangle (Section 6). All our other experiments, even on a large data set and with complex boundary conditions (Section 5), were successful.</p><p>Local Smoothing. To improve the quality of the map in the insertion step (Algorithm 2, Lines 8 and 11), we minimize the symmetric Dirichlet energy <ref type="bibr">[Smith and Schaefer 2015]</ref>, optimizing one vertex position at a time using Newton iterations, similarly to <ref type="bibr">[Fu et al. 
2015;</ref><ref type="bibr">Hormann and Greiner 2000;</ref><ref type="bibr">Labsik et al. 2000</ref>]. We favor this local approach since it is more robust to low-quality elements, which would otherwise badly affect both the numerical stability and the step size of global optimization methods. Since our goal is to improve the minimal quality of the mesh, we minimize the symmetric Dirichlet energy only for the invalid triangles (using as reference shape an equilateral triangle with area equal to the average triangle area in the 2D domain), and use the scaffold energy <ref type="bibr">[Jiang et al. 2017</ref>] (i.e., we use the element itself as the reference triangle for the symmetric Dirichlet energy) for the valid ones, which allows them to move more freely. In our experiments, the local smoothing is performed for 10 iterations after every insertion step. If a valid insertion candidate cannot be found, we keep improving the quality in batches of 50 smoothing iterations until a candidate is found (Algorithm 2, Line 11). Furthermore, at each smoothing phase, we perform a greedy coloring of the edge graph <ref type="bibr">[Kucera 1991</ref>], and the vertices inside each color are optimized in parallel.</p></div>
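The flip-avoiding line search used in the insertion stage admits a compact closed form: along a displacement direction, each triangle's signed area is a quadratic polynomial in the step length, so the largest flip-free step is the smallest positive root over all triangles. A hedged Python sketch of this bound follows (our own code, following the idea of [Smith and Schaefer 2015], not the paper's implementation):

```python
import math

def _cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def max_flip_free_step(uv, d, faces):
    """Largest step t such that uv + t*d keeps every signed area positive
    on [0, t): per triangle, the doubled area is a quadratic a*t^2 + b*t + c
    in the step length, so the bound is its smallest positive root."""
    t_max = math.inf
    for i, j, k in faces:
        e1 = (uv[j][0] - uv[i][0], uv[j][1] - uv[i][1])
        e2 = (uv[k][0] - uv[i][0], uv[k][1] - uv[i][1])
        d1 = (d[j][0] - d[i][0], d[j][1] - d[i][1])
        d2 = (d[k][0] - d[i][0], d[k][1] - d[i][1])
        a = _cross(d1, d2)                      # quadratic coefficient
        b = _cross(e1, d2) + _cross(d1, e2)     # linear coefficient
        c = _cross(e1, e2)                      # doubled area at t = 0
        roots = []
        if abs(a) > 1e-30:
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                s = math.sqrt(disc)
                roots = [(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)]
        elif abs(b) > 1e-30:
            roots = [-c / b]
        positive = [r for r in roots if r > 0.0]
        if positive:
            t_max = min(t_max, min(positive))
    return t_max
```

A caller would then take some fraction of this bound as the actual step (the paper uses α = 0.8), staying strictly inside the flip-free interval.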
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">MATCHMAKER++</head><p>The computation of locally injective maps is important in geometry processing (Section 2), with the majority of the methods focusing on efficient and scalable quality optimization. However, few methods guarantee positional constraints: the notable MatchMaker algorithm <ref type="bibr">[Kraevoy et al. 2003</ref>] reduces the problem to a number of convex planar embeddings, which are computed with Tutte's algorithm. However, as we observe in some cases (Section 5.2), such embeddings can be numerically challenging. Replacing Tutte embedding with progressive embedding enables MatchMaker to robustly compute maps with very challenging configurations of constraints. In this section, we describe an extension of <ref type="bibr">[Kraevoy et al. 2003</ref>] that (1) makes use of progressive embedding to increase robustness, and (2) supports weakly self-overlapping polygons as co-domains [Weber and Zorin 2014].</p><p>Overview. Combining our progressive embedding algorithm and the matchmaker algorithm, we describe an algorithm for solving the following problem: given a simply-connected 3D mesh, equipped with a set of user-defined hard positional constraints at vertices, compute a piecewise-linear parametrization such that (1) the map is valid in the following sense: there are no flipped triangles, and for each vertex, the map restricted to the one-ring of triangles of that vertex is bijective, unless it is a singular boundary vertex, as defined below; and (2) the parametrization satisfies the user-defined positional constraints bitwise exactly. We tackle this in four steps: we decompose the target domain into convex polygonal subdomains, match these domains to the subdomains of the source domain, compute an initial bijective map by stitching progressive embeddings for each subdomain, and finally globally optimize the mapping distortion.</p><p>User input. 
We distinguish between two cases, chosen by the user: (1) the required map is a global embedding; (2) the map is an immersion. For the first case, the constraint specification is more flexible: the user only has to provide a set of point or line constraints. For the second case, the target domain is not a subset of the plane, but rather an everywhere flat surface with overlaps. We require the user to prescribe constraints for the whole boundary of the polygon to define the target domain unambiguously (some parts may be marked as movable, but an initial position is needed), and to provide a path connecting each point or line constraint to the boundary, which allows its location on the target surface implied by the boundary specification to be defined. In this case, the target boundary polygon has to be weakly self-overlapping [Weber and Zorin 2014]; otherwise, the map does not exist.</p><p>Phase 1: Subdivision of the Target Domain. In the global embedding case, the target domain is generated by triangulating the bounding box of the input with Triangle <ref type="bibr">[Shewchuk 1996</ref>]. In the second case, the self-overlapping domain is triangulated using a modification of the Shor-Van Wyk algorithm [Shor and Van Wyk 1992], described in <ref type="bibr">[Weber and Zorin 2014]</ref>. In both cases, we ensure that the hard positional constraints are vertices of the triangulation. We then merge triangles into convex polygons in a greedy manner, by dropping edges if the resulting subdomains are convex. While this merging step is optional (the algorithm also works without it), it reduces the number of subdomains and vertices, making the next steps more efficient. In the case when an immersion is computed, it will, by construction, be an embedding on the subdomains computed starting from the Shor-Van Wyk triangulation.</p><p>Phase 2: Path Tracing on the Original Mesh. After the target domain is subdivided into convex polygons, we match this decomposition to the input mesh. 
The goal is to find non-intersecting paths connecting each of these pairs on the 3D mesh, subdividing the whole mesh into the same number of patches.</p><p>Fig. <ref type="figure">7</ref>. Starting from a triangulation generated from only boundary segments and internal constraint points (left), instead of treating triangles as sub-domains as in <ref type="bibr">[Kraevoy et al. 2003</ref>], we merge triangles into convex polygons (middle). We then find paths (bold, right) connecting constraint points to the boundary without new cycles, and prioritize their tracing.</p><p>At the tracing stage, we perform a reordering of the paths to make sure that no previously traced path will block the future ones (Figure <ref type="figure">7</ref>). In the case of embedding, the algorithm finds paths that connect all positional constraints to the boundary without creating additional loops (beyond the existing boundary). In practice, these paths are found by dropping one segment on the boundary, and then growing the minimum spanning tree (over the edges of the polygonal mesh) from the incomplete boundary loop. We first trace the paths on the minimum spanning tree and then the remaining ones, connecting the boundary to the constraints. A proof of correctness of this procedure can be found in <ref type="bibr">[Praun et al. 2001]</ref>.</p><p>For the tracing of each path on the surface, we follow <ref type="bibr">[Kraevoy et al. 2003</ref>] to find the shortest path connecting two endpoints, adding Steiner points on the edges if no path that does not intersect other paths can be found.</p><p>Phase 3: Bijective Mapping. After establishing a correspondence between each patch of the 3D mesh and a convex polygon in the target domain, we first subdivide the 2D paths in the target domain to match the number of vertices on the corresponding 3D paths, obtaining a one-to-one correspondence between them. 
We observe that up to this point, the algorithm is largely combinatorial (while some vertices are inserted on edges, their geometric position is trivially determined and is very unlikely to result in numerical problems; none were observed in our experiments). At this point, the map is defined on the boundaries of the subdomains corresponding to the convex subdomains in the target. Next, we extend the map to the interior of each region using our progressive embedding algorithm (Section 3.2). Notice that when a mesh patch is not 3-connected, we need to split the edges with both endpoints on the boundary.</p><p>Phase 4: Quality Optimization. The map obtained in the previous steps is valid according to the definition of a weakly self-overlapping map <ref type="bibr">[Weber and Zorin 2014]</ref>. Therefore, its quality can be optimized using any locally injective map improvement algorithm (Section 2). We opt for <ref type="bibr">[Rabinovich et al. 2017]</ref>, since it is efficient for large models and the implementation is readily available <ref type="bibr">[Jacobson et al. 2016]</ref>. The implementation is modified to support hard positional constraints by eliminating the corresponding variables.</p></div>
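The greedy merging of Phase 1 can be sketched combinatorially: drop a shared edge and keep the merged ring only if it stays convex. A minimal sketch in Python, assuming CCW vertex-index rings and a coordinate dictionary (the function names are illustrative, not from the reference implementation):

```python
def is_convex(pts):
    """True if the CCW-ordered polygon makes no right turn."""
    n = len(pts)
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        # z-component of (b - a) x (c - b); negative means a reflex corner
        if (bx - ax) * (cy - by) - (by - ay) * (cx - bx) < 0.0:
            return False
    return True

def merge_if_convex(coords, poly_a, poly_b, u, v):
    """Drop the edge shared by two CCW polygons -- (u, v) as it appears
    in poly_a, (v, u) in poly_b -- and return the merged vertex ring if
    it is convex, else None (the greedy step keeps the polygons split)."""
    na, nb = len(poly_a), len(poly_b)
    iv, iu = poly_a.index(v), poly_b.index(u)
    ring_a = [poly_a[(iv + k) % na] for k in range(na)]  # v ... u
    ring_b = [poly_b[(iu + k) % nb] for k in range(nb)]  # u ... v
    merged = ring_a + ring_b[1:-1]
    return merged if is_convex([coords[i] for i in merged]) else None
```

Applying this test to every interior edge in a greedy loop reproduces the optional merging step; the sketch ignores the bookkeeping needed to keep hard constraint vertices on subdomain corners.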
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">RESULTS AND DISCUSSION</head><p>We implemented our algorithm in C++, using Eigen <ref type="bibr">[Guennebaud et al. 2010]</ref> for linear algebra, and libigl <ref type="bibr">[Jacobson et al. 2016]</ref> for geometry processing and visualization. The reference source code, the data used, and the scripts to reproduce the results are attached in the additional material. The timings and statistics for the datasets shown in the paper are summarized in Table <ref type="table">1</ref> and Table <ref type="table">2</ref>. (Table <ref type="table">1</ref>: statistics of the input and output meshes in the planar embedding test (Section 5.1). From left to right: name of the dataset, number of vertices, number of faces, number of invalid elements (positive area, but with energy above 1e20) after Tutte embedding, number of flipped elements after Tutte embedding, and progressive embedding (Section 3) running time in seconds.) We first present results computed using only our progressive embedding (Section 5.1), and then demonstrate the generation of low distortion, locally bijective maps created with our extension of MatchMaker (Section 5.2).</p></div>
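For reference, the Tutte embedding baseline whose failures are counted in Table 1 pins the boundary to a convex polygon and places every interior vertex at the average of its neighbors. A minimal uniform-weight Gauss-Seidel sketch (illustrative names; a real implementation solves the sparse linear system directly):

```python
def tutte_embed(adj, boundary_pos, n_iters=1000):
    """Uniform-weight Tutte embedding: boundary vertices pinned to a
    convex polygon, each interior vertex repeatedly moved to the
    average of its neighbors (Gauss-Seidel on the Tutte system)."""
    pos = dict(boundary_pos)
    interior = [v for v in adj if v not in boundary_pos]
    for v in interior:
        pos[v] = (0.0, 0.0)
    for _ in range(n_iters):
        for v in interior:
            nbrs = [pos[w] for w in adj[v]]
            pos[v] = (sum(p[0] for p in nbrs) / len(nbrs),
                      sum(p[1] for p in nbrs) / len(nbrs))
    return pos
```

In infinite precision the result is a valid embedding; the point of the experiments below is that the floating point solution can nonetheless contain flipped triangles.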
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Progressive Embedding</head><p>Planar Embedding for the Thingi10k Dataset <ref type="bibr">[Zhou and Jacobson 2016]</ref>. By computing Tutte's embedding onto a triangle boundary for the genus-zero models among 2718 surface mesh models, we observed 80 cases where the generated parametrization has flipped elements due to floating point rounding errors. Using our progressive strategy, we are able to fix all failed cases. A selection of the parametrization results is shown in Figure <ref type="figure">2</ref>.</p><p>Integration with OptCuts <ref type="bibr">[Li et al. 2018]</ref>. OptCuts is a joint optimization method to create UV seams from a 3D model, balancing seam length and parametrization quality. It relies on a valid initialization, which, for a genus-0 model, is computed by randomly cutting two adjacent edges as a seam and then flattening the mesh onto the plane with Tutte embedding. In Figure <ref type="figure">8</ref>, we show two examples where this initialization fails. Both models can be processed if progressive embedding is used instead of Tutte embedding, allowing OptCuts to proceed and optimize the UV map. (Fig. <ref type="figure">8</ref>: Three UV maps generated by OptCuts <ref type="bibr">[Li et al. 2018]</ref> using an initial embedding created by our algorithm. OptCuts fails to process both models if Tutte embedding is used instead.)</p><p>Mapping a Hele-Shaw Polygon to a Square. Hele-Shaw flow is a two-dimensional Stokes flow of mixing liquids between two parallel flat surfaces separated by a small gap. In Figure <ref type="figure">1</ref>, we show an example mesh generated using the Hele-Shaw simulation proposed in <ref type="bibr">[Segall et al. 2016]</ref>. One way to compute a bijective map of the interior of the polygon between different frames is a cross-parameterization using a square as the common domain, with no internal constraints. 
Tutte embedding fails in this case, introducing 46 flipped faces (Figure <ref type="figure">1</ref>, left), while progressive embedding produces a valid map with lower distortion (Figure <ref type="figure">1</ref>, right).</p><p>Self-Overlapping Locally-Injective Maps. By introducing the Shor-Van Wyk algorithm into the MatchMaker pipeline, we are able to map a surface mesh with disk topology to self-overlapping boundaries as in <ref type="bibr">[Weber and Zorin 2014]</ref>. Similarly to [Weber and Zorin 2014], our algorithm can generate locally-injective, self-overlapping parametrizations (Figure <ref type="figure">9</ref>), which are commonly used by quadrangulation algorithms <ref type="bibr">[Bommes et al. 2012]</ref>.</p></div>
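The flipped-face counts reported above reduce to one signed-area test per triangle of the parametrization. A short sketch, assuming a list of UV positions and triangle index triples (names are illustrative):

```python
def signed_area(p, q, r):
    """Twice the signed area of triangle (p, q, r) in the UV plane;
    non-positive means the element is degenerate or flipped."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def count_flipped(uv, faces):
    """Number of non-positively oriented triangles in the embedding."""
    return sum(1 for (i, j, k) in faces
               if signed_area(uv[i], uv[j], uv[k]) <= 0.0)
```

A valid embedding has `count_flipped(uv, faces) == 0`; the 46 flipped faces of the Tutte result above are exactly what this check reports.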
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Matchmaker++</head><p>Comparison with <ref type="bibr">[Kovalsky et al. 2015]</ref>. We parametrized the global parametrization benchmark introduced in <ref type="bibr">[Myles et al. 2014]</ref>, using the seams in the obj files, and fixing 3 random points of each mesh at random positions. This is a challenging task, since the random constraints introduce a large distortion. Our method succeeded on all 102 models: a selection of the most challenging ones is shown in Figure <ref type="figure">10</ref>. We also ran the same experiment using the most recent projection method <ref type="bibr">[Kovalsky et al. 2015]</ref> (one of the few methods that supports similar constraints without requiring a fully specified target domain), using LSCM <ref type="bibr">[L&#233;vy et al. 2002]</ref> as an initial guess. The method failed on 28 of the 102 models (27%). Figure <ref type="figure">11</ref> shows three of these failed cases, with flipped elements in the output; the quality of their method is also considerably lower than that of our approach. Note that this comparison favours our method, since we are allowed to remesh the map, while <ref type="bibr">[Kovalsky et al. 2015]</ref> preserves the original connectivity.</p><p>Stress Test. To further evaluate the robustness and applicability of our algorithm, we performed an additional stress test, parametrizing the 102 models of <ref type="bibr">[Myles et al. 2014]</ref> onto a planar space-filling curve, adding 3 random positional constraints. These experiments push the algorithm to the limit: MatchMaker fails on 5 models if Tutte embedding is used, while with progressive embedding it succeeds in all cases, producing bijective maps exactly satisfying the hard positional constraints (Figure <ref type="figure">12</ref>).</p></div>
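The path tracing underlying these MatchMaker runs (Section 4) connects every hard constraint to the boundary along a tree grown from the boundary loop, so no traced path can block a later one. A combinatorial sketch, with a multi-source BFS standing in for the minimum spanning tree of Section 4 (illustrative names; Steiner point insertion is omitted):

```python
from collections import deque

def trace_to_boundary(adj, boundary, constraints):
    """Grow a BFS tree rooted at the whole boundary loop, then connect
    each constrained vertex to the boundary by walking tree parents.
    Tree paths may share vertices but never cross transversally."""
    parent = {v: None for v in boundary}
    queue = deque(boundary)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    bset = set(boundary)
    paths = []
    for c in constraints:
        path = [c]
        while path[-1] not in bset:
            path.append(parent[path[-1]])
        paths.append(path)
    return paths
```

Tracing the tree paths first, as described in Section 4, is what guarantees that the remaining constraint-to-boundary connections always exist.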
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">LIMITATIONS AND CONCLUDING REMARKS</head><p>We introduced a robust algorithm to compute planar embeddings, and demonstrated its practical utility in common geometry processing tasks. Our algorithm is provably correct in infinite precision and is designed to work robustly with floating point coordinates; unfortunately, we cannot guarantee that an output is produced in the latter case, since a solution of the local point placement problem might not exist. Consider the example in Figure <ref type="figure">13</ref>: the bounding box of the triangle has short sides (the difference between the floating point coordinate representations is only in the least significant bit of the mantissa). Assume that our algorithm needs to split off a vertex from the vertex with the numerically flat angle A, placing the resulting point in the interior. In this situation, our algorithm fails, since the average of the coordinates (in floating point) of the boundary triangle does not lie inside the triangle due to numerical rounding.</p><p>Except for this extreme case, we have not observed any other failure cases for our algorithm, which robustly produced thousands of embeddings and, when paired with MatchMaker, enables the robust generation of constrained locally injective maps. (Fig. <ref type="figure">13</ref>: h = 2^-53. The illustration is not to scale.)</p></div>
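The failing placement can be made concrete: the candidate point is the floating point average of the surrounding positions, validated by a strict orientation test. A sketch of that check (illustrative names; for the sliver of Figure 13, rounding can make the test reject every representable candidate, which is exactly the failure mode described above):

```python
def centroid(pts):
    """Floating-point average of a list of 2D points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def orient(p, q, r):
    """Twice the signed area of (p, q, r), evaluated in floating point."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def strictly_inside(pt, tri):
    """Strict point-in-triangle test for a CCW triangle; this is the
    validation that rounding can defeat on near-degenerate triangles."""
    a, b, c = tri
    return (orient(a, b, pt) > 0 and orient(b, c, pt) > 0
            and orient(c, a, pt) > 0)
```

For any well-shaped triangle the centroid passes; for the degenerate configuration of Figure 13, `strictly_inside(centroid(tri), tri)` can evaluate to False in double precision.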
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A PROOFS</head><p>The proof of the existence of the collapse sequence for two-dimensional manifold meshes can be found, e.g., in <ref type="bibr">[Mijatovi&#263; 2003]</ref>, where it is derived from the shellability of two-dimensional manifold meshes homeomorphic to a disk (i.e., the possibility of removing triangles one-by-one, keeping the topology of the remaining part of the mesh unchanged), and makes use of a specific composition of Pachner moves equivalent to edge collapse. We present a different proof, based on proving the existence of a collapsible edge, which is aligned with the structure of our algorithm and helps us to show the existence of the inverse vertex split sequence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.1 Existence of the Collapse Sequence</head><p>We assume that the input mesh connectivity (V 0 , F 0 ) is manifold, i.e., each edge is shared by no more than two triangles, and the triangles incident at a vertex can be arranged in a sequence so that two sequential triangles share an edge. For interior vertices, the sequence is circular, i.e., the first and last triangles also share an edge. With the topology of a 2D disk, the graphs of edges of such meshes are planar, i.e., can be embedded in the plane, with positions P 0 = {p i &#8712; R 2 } assigned to vertices v i . By F&#225;ry's theorem <ref type="bibr">[F&#225;ry 1948]</ref>, there is a straight-edge embedding of this graph in the plane with non-intersecting edges (F&#225;ry embedding). In subsequent lemmas, we use geometric images of vertices and edges under this embedding. Only the existence of this embedding, not its specific construction, is used to prove the existence of the collapse sequence.</p><p>The following sequence of lemmas focuses on the one-ring neighborhood of an interior vertex, and shows that at least one of the adjacent edges satisfies the link condition. This observation further leads to Theorem A.8: a valid sequence of edge collapses can be used to reduce the mesh to a mesh with a single interior vertex. A vertex of the mesh is interior if it does not lie on the boundary, and an edge is interior if its two endpoints are interior vertices.</p><p>Definition A.1. An interior edge v i v j satisfies the link condition if |N i &#8745; N j | = 2, where N i is the set of the adjacent vertices of v i .</p><p>Lemma A.2. Let v 0 be an interior vertex of degree d (Figure <ref type="figure">14</ref>). We enumerate its neighbors counterclockwise around the vertex (using the F&#225;ry embedding), denoting them v 1 , v 2 , . . . , v d . Assume v 0 v 1 violates the link condition, i.e., |N 0 &#8745; N 1 | &gt; 2.</p><p>Proof. 
Without loss of generality, assume &#8710;v 0 v 1 v k is oriented counterclockwise. Consider the segment v 0 v i , for 1 &lt; i &lt; k. The half-line starting at v 0 and containing this segment lies between the half-lines containing v 1 and v k , because the vertices were numbered counterclockwise. Therefore, by continuity, the half-line contains a point in the interior of &#8710;v 0 v 1 v k . If v i is outside or on the boundary of &#8710;v 0 v 1 v k , then the half-line connects an interior point and a non-interior point different from v 0 , and intersects v 1 v k , which contradicts the assumption on the embedding. Thus, v i is in the interior of &#8710;v 0 v 1 v k .</p><p>The link of any interior vertex forms a complete boundary loop. As we assume the mesh to be simply connected, there is only one boundary loop. So the whole boundary has to coincide with the link of any interior vertex, from which it follows that there is only one. &#9633;</p><p>We remark that when there is only one interior vertex left, if the boundary vertices v i , i &#8805; 1, are assigned positions p i so that they form a star-shaped simple polygon, there is a position p 0 (within the interior of the kernel of the boundary) for the remaining interior vertex v 0 that results in a valid straight-edge embedding.</p><p>As a result of sequentially collapsing edges, we obtain a sequence (V i , F i , C i ), i = 1, 2, . . . , k with the following properties: (V i , F i ) is a valid triangulation, the boundary vertices are the same for all V i , and C i is a valid collapse.</p><p>Proposition A.9. Suppose the vertices v i of the disk-topology manifold mesh (V , F ) are assigned parametric positions p i in the plane, and the map is bijective on the boundary so that the triangles all have positive orientation. Then the sum of the angles of triangles incident at an interior vertex is 2&#960;.</p><p>Proof. 
Assign, e.g., unit length to all edges of the mesh; this associates a surface M with the mesh, with each combinatorial triangle corresponding to an equilateral triangle. Then the positions p i define a PL map from M to the plane. By Theorem 1 from <ref type="bibr">[Lipman 2014]</ref>, this map is globally bijective; the statement of the proposition immediately follows. &#9633;</p></div>
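The argument above is constructive in spirit: the collapse sequence is driven by repeatedly finding an interior edge that satisfies the link condition of Definition A.1. A combinatorial sketch, assuming a triangle index list (boundary handling and geometric validity of the collapse are omitted):

```python
def vertex_neighbors(faces):
    """Vertex adjacency sets built from a list of triangle index triples."""
    adj = {}
    for f in faces:
        for a in range(3):
            u, v = f[a], f[(a + 1) % 3]
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj

def satisfies_link_condition(adj, i, j):
    """Edge (i, j) is collapsible iff its endpoints share exactly two
    neighbors, the two vertices opposite the edge (Definition A.1)."""
    return len(adj[i] & adj[j]) == 2

def find_collapsible_edge(faces, interior_edges):
    """Return the first interior edge satisfying the link condition,
    or None; the lemmas above guarantee one exists until a single
    interior vertex remains."""
    adj = vertex_neighbors(faces)
    for (i, j) in interior_edges:
        if satisfies_link_condition(adj, i, j):
            return (i, j)
    return None
```

When |N_i &#8745; N_j| &gt; 2, collapsing v_i v_j would create a duplicated edge, which is exactly the configuration the link condition excludes.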
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.2 Vertex split</head><p>Definition A.10. Let (V , F ) be a mesh with a valid straight-edge embedding in the plane given by vertex positions P. Consider a closed fan of triangles F (v 0 ; v 1 . . . v d+1 ) centered at an interior vertex v 0 , with v d+1 = v 1 , and an open sub-fan F (v 0 ; v 1 . . . v k ). A vertex split introduces a new vertex v m with a position p m , replaces the sub-fan F (v 0 ; v 1 . . . v k ) with the fan F (v m ; v 1 . . . v k ), and adds the triangles &#8710;v 0 v 1 v m and &#8710;v 0 v m v k . We denote such a split S by (v m , p m , F (v 0 ; v 1 . . . v k )). The connectivity of the mesh obtained by applying the split is identical to that of the mesh the corresponding collapse was applied to.</p><p>The following lemma establishes that, if the initial embedding is valid, we can perform a split reversing any collapse while maintaining the validity of the embedding.</p><p>Lemma A.11. Consider a fan of triangles F (v 0 ; v 1 . . . v k ) (Figure <ref type="figure">5</ref>) with positive signed areas {A(&#8710;v 0 v i-1 v i )} 1&lt;i &#8804;k , with the angles of triangles incident at v 0 summing up to 2&#960;, and whose kernel has a non-empty interior. Then a new vertex v m with position p m located in the interior of the fan can be split off p 0 , so that min 1&lt;i &#8804;k A(&#8710;v m v i-1 v i ) &gt; 0 and the angles of triangles in both resulting fans, at v 0 and at v m , sum up to 2&#960;.</p><p>Proof. Define a function f (p x ) = min i A(&#8710;v x v i-1 v i ). This is a continuous function of the coordinates p x of the point v x . Because f (p 0 ) &gt; 0, there is a disk B(p 0 , &#949;) of radius &#949; &gt; 0 such that f (p x ) &gt; 0 for any p x &#8712; B(p 0 , &#949;), i.e., for all i, A(&#8710;v m v i-1 v i ) &gt; 0 if we pick p m inside B(p 0 , &#949;). Suppose we initially place p m at p 0 , with the new triangles added as a result of the split having zero angles at v 1 and v k . 
We note that, for each of v 0 and v m , the angles of the incident triangles in this degenerate configuration sum up to 2&#960;. The angles of triangles change continuously as functions of the vertex position p x , and so does their sum. On the other hand, if each triangle remains positively oriented (A(&#8710;v m v i-1 v i ) &gt; 0), then the sum of the angles can only change discretely, and has to be of the form 2&#960;n, n &#8712; Z (n-fold cover). We conclude that n has to remain one, as it is one for the initial position.</p><p>Consider the intersection C of the half-planes bounded by the lines containing p 0 p 1 and p 0 p k (for each segment, we choose the half-plane on the side of the interior of the fan).</p><p>The following theorem is a straightforward application of Lemma A.11.</p><p>Theorem A.12. Suppose we have a sequence of valid collapses (V i , F i , C i ), i = 0 . . . N -2, where N = |V 0 | is the number of interior vertices in the initial mesh, all (V i , F i ), i = 0 . . . N -1, are 3-connected planar, and the last mesh (V N -1 , F N -1 ), with a single interior vertex, has a valid straight-edge embedding in the plane with vertex positions P N -1 . Suppose C i = (v i m , F (v i+1 0 , v i+1 1 . . . v i+1 k )). Then the sequence of vertex splits S i that are inverses of C i results in a valid straight-edge embedding of (V 0 , F 0 ), given by vertex positions P 0 .</p><p>Proof. (V N -1 , F N -1 ) with positions P N -1 is valid by assumption. Each vertex split S i results in a straight-edge embedding of (V i , F i ) by Lemma A.11. By induction, the embedding of (V 0 , F 0 ) obtained by the sequence of splits reverting the sequence of collapses is a straight-edge embedding. &#9633;</p><p>MatchMaker <ref type="bibr">[Kraevoy et al. 2003]</ref> tackles the problem of planar parametrization of a surface with hard positional constraints. 
The general idea is to decompose the mesh into patches and map them to convex domains using Tutte embedding to generate a bijective parametrization. We briefly summarize the pipeline of the original MatchMaker method.</p><p>Triangulate Virtual Boundary. Use a conventional unconstrained parametrization method to get a planar mesh M 0 (the dark region in Figure <ref type="figure">16(b)</ref>). Embed M 0 in a rectangular bounding box, and use constrained Delaunay triangulation to triangulate the region between the planar mesh and the virtual boundary, obtaining M &#8242; 0 (Figure <ref type="figure">16</ref>(a)). Similarly, create a constrained Delaunay triangulation of the same bounding box with the hard constraints at their prescribed positions using <ref type="bibr">[Shewchuk 1996]</ref>, obtaining the mesh in Figure <ref type="figure">16 (a)</ref>. We call this mesh M 1 .</p><p>Matching Patches. For each internal edge of M 1 , trace an edge path between the two corresponding points on M &#8242; 0 using Dijkstra's shortest path algorithm. This process is performed sequentially, one edge at a time. Every new path traced must meet the following criteria: (1) it does not intersect any other path; (2) it does not block necessary future paths. This is achieved using the minimum spanning tree as described in Section 4. Steiner points may be needed to make sure that a valid path can always be found (red dot in Figure <ref type="figure">17</ref> (c)). As a result of this step, the extended planar mesh M &#8242; 0 is subdivided into patches, and every patch is matched with a triangle in M 1 .</p><p>Embedding. After the previous step, the problem is reduced to mapping a patch of a planar mesh to a convex boundary, i.e., a problem solved by Tutte embedding <ref type="bibr">[Tutte 1963]</ref>. 
An additional edge splitting operation is needed to match the number of vertices on the edges of the triangles in M 1 to the number of vertices on the corresponding paths in M &#8242; 0 , as shown in Figure <ref type="figure">16 (b)</ref>.</p><p>Smoothing and Post-processing. The parametrization generated at the embedding stage can be optimized using any technique preventing triangles from changing their orientations.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0"><p>ACM Trans. Graph., Vol. 38, No. 4, Article 32. Publication date: July 2019.</p></note>
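The edge splitting step amounts to making the vertex counts on the two corresponding polylines agree. A sketch via uniform arc-length resampling of the target-side polyline (illustrative; the actual operation inserts split points on existing edges rather than moving vertices, and assumes n &#8805; 2):

```python
import math

def resample_path(path, n):
    """Resample a 2D polyline to n vertices, spaced uniformly by arc
    length, keeping both endpoints fixed."""
    seg = [math.dist(path[i], path[i + 1]) for i in range(len(path) - 1)]
    total = sum(seg)
    out = []
    for k in range(n):
        t = total * k / (n - 1)          # target arc length for vertex k
        i = 0
        while i < len(seg) - 1 and t > seg[i]:
            t -= seg[i]
            i += 1
        a, b = path[i], path[i + 1]
        u = t / seg[i] if seg[i] else 0.0
        out.append((a[0] + u * (b[0] - a[0]), a[1] + u * (b[1] - a[1])))
    return out
```

After resampling, each boundary edge of the target triangle carries exactly as many vertices as the matched path, so the per-patch Tutte embedding has a one-to-one boundary correspondence.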
		</body>
		</text>
</TEI>
