We study Vietoris-Rips and Cech complexes of metric wedge sums and metric gluings. We show that the Vietoris-Rips (resp. Cech) complex of a wedge sum, equipped with a natural metric, is homotopy equivalent to the wedge sum of the Vietoris-Rips (resp. Cech) complexes. We also provide generalizations for certain metric gluings, i.e. when two metric spaces are glued together along a common isometric subset. As our main example, we deduce the homotopy type of the Vietoris-Rips complex of two metric graphs glued together along a sufficiently short path. As a result, we can describe the persistent homology, in all homological dimensions, of the Vietoris-Rips complexes of a wide class of metric graphs.
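As a concrete illustration of the construction underlying these results, the following minimal Python sketch builds the Vietoris-Rips complex of a finite metric space at a fixed scale r: a simplex is included whenever all pairwise distances among its vertices are at most r. The function name `vietoris_rips` and the example points are illustrative, not taken from the paper.

```python
from itertools import combinations

def vietoris_rips(points, dist, r, max_dim=2):
    """Vietoris-Rips complex at scale r: a simplex for every subset of
    points whose pairwise distances are all at most r (up to max_dim)."""
    n = len(points)
    simplices = [frozenset([i]) for i in range(n)]
    for k in range(2, max_dim + 2):          # simplices with k vertices
        for subset in combinations(range(n), k):
            if all(dist(points[i], points[j]) <= r
                   for i, j in combinations(subset, 2)):
                simplices.append(frozenset(subset))
    return simplices

# Example: three points with the taxicab metric; at r = 1 the two far
# points are not joined, so we get vertices and two edges, no triangle.
pts = [(0, 0), (1, 0), (0, 1)]
d = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
print(vietoris_rips(pts, d, r=1.0))
```

At larger scales the missing edge (and then the triangle) appears, which is exactly the filtration that persistent homology tracks.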
Proceedings of the 2009 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 645-654. Exact Algorithms for Partial Curve Matching via the Fréchet Distance. Kevin Buchin, Maike Buchin, and Yusu Wang. DOI: https://doi.org/10.1137/1.9781611973068.71. Published: 2009. ISBN: 978-0-89871-680-1. eISBN: 978-1-61197-306-8.
Abstract: Curve matching is a fundamental problem that occurs in many applications. In this paper, we study the problem of measuring partial similarity between curves. Specifically, given two curves, we wish to maximize the total length of subcurves that are close to each other, where closeness is measured by the Fréchet distance, a common distance measure for curves. The resulting maximal length is called the partial Fréchet similarity between the two input curves. Given two polygonal curves P and Q in IR^d of size m and n, respectively, we present the first polynomial-time exact algorithm to compute ℱδ(P, Q), the partial Fréchet similarity between P and Q, under the L1 and L∞ norms. Specifically, we formulate the problem of computing ℱδ(P, Q) as a longest path problem, and solve it in O(mn(m + n) log(mn)) time, under the L1 or L∞ norm, using a "shortest-path map" type decomposition. To the best of our knowledge, this is the first paper to study this natural definition of partial curve similarity in the continuous setting (with all points on the curves considered), and to present a polynomial-time exact algorithm for it.
Neuroscientific data analysis has traditionally relied on linear algebra and stochastic process theory. However, the tree-like shapes of neurons cannot easily be described as points in a vector space (the subtraction of two neuronal shapes is not a meaningful operation), and methods from computational topology are better suited to their analysis. Here we introduce methods from Discrete Morse (DM) Theory to extract the tree-skeletons of individual neurons from volumetric brain image data, and to summarize collections of neurons labelled by tracer injections. Since individual neurons are topologically trees, it is sensible to summarize the collection of neurons using a consensus tree-shape, which provides a richer information summary than the traditional regional 'connectivity matrix' approach. The conceptually elegant DM approach lacks hand-tuned parameters and captures global properties of the data, as opposed to previous approaches, which are inherently local. For individual skeletonization of sparsely labelled neurons we obtain substantial performance gains over state-of-the-art non-topological methods (over 10% improvement in precision and faster proofreading). The consensus-tree summary of tracer injections incorporates the regional connectivity matrix information, but in addition captures the collective collateral branching patterns of the set of neurons connected to the injection site, and provides a bridge between single-neuron morphology and tracer-injection data.
The Gromov-Hausdorff distance is a natural way to measure the distortion between two metric spaces. However, there has been only limited algorithmic development to compute or approximate this distance. We focus on computing the Gromov-Hausdorff distance between two metric trees. Roughly speaking, a metric tree is a metric space that can be realized by the shortest path metric on a tree. Any finite tree with positive edge weights can be viewed as a metric tree, where each weight is treated as an edge length and the metric is the induced shortest path metric on the tree. Previously, Agarwal et al. showed that even for trees with unit edge length, it is NP-hard to approximate the Gromov-Hausdorff distance between them within a factor of 3. In this paper, we present a fixed-parameter tractable (FPT) algorithm that approximates the Gromov-Hausdorff distance between two general metric trees within a multiplicative factor of 14.
Interestingly, the development of our algorithm is made possible by a connection between the Gromov-Hausdorff distance for metric trees and the interleaving distance for so-called merge trees. Merge trees arise naturally in practice as simple yet meaningful topological summaries (they are variants of Reeb graphs and contour trees), and are of independent interest. It turns out that an exact or approximation algorithm for the interleaving distance leads to an approximation algorithm for the Gromov-Hausdorff distance. One of the key contributions of our work is that we redefine the interleaving distance in a way that makes it easier to develop dynamic programming approaches to compute it. We then present a fixed-parameter tractable algorithm to compute the interleaving distance between two merge trees exactly, which ultimately leads to an FPT algorithm to approximate the Gromov-Hausdorff distance between two metric trees. This exact FPT algorithm for the interleaving distance between merge trees is of independent interest, as it is known to be NP-hard to approximate the interleaving distance within a factor of 3, and the best previously known algorithm has an approximation factor of O(√n), even for trees with unit edge length.
Topological data analysis (TDA) has emerged recently as a viable tool for analyzing complex data, and the area has grown substantially both in its methodologies and applicability. Providing a computational and algorithmic foundation for techniques in TDA, this comprehensive, self-contained text introduces students and researchers in mathematics and computer science to the current state of the field. The book features a description of mathematical objects and constructs behind recent advances, the algorithms involved, computational considerations, as well as examples of topological structures or ideas that can be used in applications. It provides a thorough treatment of persistent homology together with various extensions – like zigzag persistence and multiparameter persistence – and their applications to different types of data, like point clouds, triangulations, or graph data. Other important topics covered include discrete Morse theory, the Mapper structure, optimal generating cycles, as well as recent advances in embedding TDA within machine learning frameworks.
Let f be a continuous function defined on a topological space. As we increase the function value, connected components in the level set of f appear, disappear, merge, or split, and the Reeb graph tracks these changes. The join and split trees of f track the changes of connected components in the sublevel and superlevel sets, respectively. If the Reeb graph is loop-free, then it is a tree called the contour tree. Given a piecewise linear function f defined on a simplicial complex K, Carr, Snoeyink and Axen proposed a simple and elegant algorithm to compute the contour tree of f by "merging" its join and split trees in near-linear time.
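The join tree that this merging algorithm starts from can be sketched with a standard union-find sweep over the vertices in increasing function value: each time an edge connects two previously separate sublevel-set components, a join is recorded. The Python sketch below is a hypothetical minimal implementation for a function on a graph, not the authors' own code; the function name `join_tree` and the example are illustrative.

```python
def join_tree(values, edges):
    """Join events of a function on a graph: sweep vertices in increasing
    order of value; record (height, root_a, root_b) each time two
    sublevel-set components merge."""
    n = len(values)
    parent = list(range(n))          # union-find forest over vertices

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    seen, joins = set(), []
    for v in sorted(range(n), key=lambda v: values[v]):
        seen.add(v)
        for u in adj[v]:
            if u in seen:                   # edge now lies in the sublevel set
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    joins.append((values[v], ru, rv))
    return joins

# A path graph whose middle vertex is the highest: the two low ends
# belong to separate components until the sweep reaches the middle.
print(join_tree([0.0, 2.0, 1.0], [(0, 1), (1, 2)]))
```

Running the same sweep on -f gives the split tree, and merging the two yields the contour tree in the loop-free case.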
Specifically, note that the pre-image of Arc(pq) is the evolution of a single contour until it splits, merges with another contour, or is destroyed. We call this the topological component C(pq) corresponding to Arc(pq). See the right figure for an illustration, where the shaded region in M is C(pq). We can obtain C(pq) by computing the contours passing through p and q and cutting and refining any triangle intersecting these contours. We then compute any path in C(pq) from p to q by a depth-first search on the edges in C(pq), and use this path as π(p, q). Note that some edges in π(p, q) may be the refined edges along the contours. For each Reeb graph arc Arc(pq), the computation of C(pq), as well as of a pre-image path π(p, q), takes O(n) time. Since there are O(n) arcs in the Reeb graph, it takes O(n^2) time to compute a path in the pre-image for every Reeb graph arc. Given a Reeb graph loop c_e, we compute a loop γ_i in M by simply assembling the pre-image paths for all arcs in c_e. This step takes O(ng) time for all g Reeb graph loops. We remark that the loops γ_i may contain edges produced by locally cutting a triangle with a contour. However, later in Section 3.2, we map these edges to the original mesh edges in M before we perform tightening of handle/tunnel loops. Hence, the final loops output by our algorithm consist only of edges from the input mesh.
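The depth-first search that extracts a single p-to-q path inside C(pq) can be sketched as follows; the adjacency structure and the function name `dfs_path` are illustrative, assuming the edges of C(pq) have already been collected into an adjacency list.

```python
def dfs_path(adj, src, dst):
    """Depth-first search for any src-to-dst path in a graph given as an
    adjacency list; this plays the role of the pre-image path pi(p, q)."""
    stack, prev = [src], {src: None}
    while stack:
        v = stack.pop()
        if v == dst:
            path = []                 # walk predecessor links back to src
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for u in adj.get(v, []):
            if u not in prev:         # visit each vertex at most once
                prev[u] = v
                stack.append(u)
    return None                       # dst unreachable from src

# Toy component: p connects to q through an interior vertex 'a'.
adj = {'p': ['a', 'b'], 'a': ['p', 'q'], 'b': ['p'], 'q': ['a']}
print(dfs_path(adj, 'p', 'q'))
```

Since every vertex is pushed at most once, the search is linear in the size of C(pq), matching the O(n) bound per arc stated above.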