
Algorithms via geometry

Configuration spaces and their geometry

For a fixed natural number n and manifold X, the configuration space \mathrm{Conf}^n X is the space of all tuples of n distinct points in X, i.e. we may think of it as the open manifold X^n - \Delta, where \Delta denotes the extended diagonal consisting of all n-tuples in which at least two coordinates coincide.

Configuration spaces arise naturally in the consideration of certain problems in physics, and more recently and more concretely in robotics: here, for instance, the manifold X may be taken to be the space in which our n robots are moving around, and then \mathrm{Conf}^n X represents all the possible combinations (“configurations”) of positions which the robots can occupy at any point in time. A path in \mathrm{Conf}^n X then represents a set of possible simultaneous trajectories, and so the study of the configuration space and its properties becomes useful to motion planning.

More broadly, we may consider more general configuration spaces which represent the possible positions, or more general states, of a given robotic (or other) system, and think of paths in these configuration spaces as ways for the system to move from one position / state to another.

Sometimes the configuration spaces can be CAT(0), or even cubulated (i.e., in this context, made homeomorphic or maybe even bi-Lipschitz to a CAT(0) cube complex), and that can be convenient for developing algorithms to solve corresponding problems in robotics. This is the case, for instance, for a model robotic arm operating in a tunnel.

A slightly different application in which non-positively curved configuration spaces have also made an appearance is to phylogenetic trees.

Comparisons in image spaces

Suppose we have a collection of images of comparable two-dimensional surfaces, say neural images of the exterior surface of the human brain, or shapes of various animal bones, and we wish to compare them in some way, or perhaps obtain some “best match” mapping between a given neural image and some standard structural template/s.

This sounds very much like the question asked and addressed by Teichmüller theory: what are the natural maps between two different geometries on the same topological surface? In the case of higher-genus surfaces, the answer that Teichmüller theory gives is “extremal quasiconformal maps”. More generally (and indeed this forms an important part of the backdrop for Teichmüller theory in the first place), uniformization tells us that any compact surface is conformally equivalent to a sphere, a flat torus, or a hyperbolic surface.

Hence conformal maps and their geometry become natural tools in this sort of shape analysis, and quasiconformal maps (and possibly Teichmüller theory) can come into the picture when either discrete approximations or higher-genus surfaces are involved. Notions of distance between shapes, based on some notion of deformation energy, or on quasiconformal constants, can be useful in making quantitative statements such as “this shape is closer to model shape A than to model shape B.”

Cryptography from geometric group theory

Non-commutative cryptography uses cryptographic primitives, methods and systems based on non-commutative algebraic structure, as opposed to most familiar cryptographic systems, which are based on the commutative algebra underlying number theory.

Many of the protocols are very similar or analogous to those in more familiar (“commutative”) cryptographic systems, the main difference being that the hard (or, in many cases, presumed-to-be-hard based on existing evidence—rigorous cryptanalysis is still in many cases an open problem) problems underlying the security of the proposed primitives and/or systems come from group theory, or the theory of other non-commutative algebraic structures.

For instance, here is a general protocol for encryption / decryption: let G be a group, let A and B be commuting subgroups (i.e. ab = ba for all a \in A and b \in B, or in other words A \subset C_G(B) and B \subset C_G(A)), and fix x \in G (to be made public). Alice and Bob choose secret keys a and b from A and publish y = x^a, z = x^b as public keys.

To send an encrypted message m, Alice picks a random r \in B and sends (x^r, H(z^r) \oplus m), where H is some hash function and \oplus denotes XOR.

To recover the plaintext, Bob computes z^r = (x^b)^r = x^{br} = x^{rb} = (x^r)^b (using that b and r commute) and then m = (H(z^r) \oplus m) \oplus H(z^r).

In the commutative setting, where we interpret G as (the multiplicative group of) some finite quotient of an integer ring and x^r as exponentiation, the security of this protocol would depend on the difficulty of finding discrete logarithms.

In the non-commutative setting, in a somewhat egregious abuse of notation, we interpret x^r as conjugation, and then the security of the protocol would depend on the difficulty of the conjugacy search problem (i.e. given z, t \in G, find r \in G s.t. t = rzr^{-1}.)
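To make this concrete, here is a toy (and thoroughly insecure) sketch of the protocol above, with x^g read as conjugation g^{-1} x g. The choice of group (permutations of eight points), of commuting subgroups A and B (permutations moving disjoint sets of points), and all helper names are illustrative assumptions for the sketch, not part of any proposed real-world scheme.

```python
# Toy conjugation-based ElGamal-style encryption, purely for illustration.
# A = permutations moving only {0,1,2,3}, B = permutations moving only {4,5,6,7};
# elements of A and B commute because they move disjoint points.
import hashlib
import random

N = 8  # we work in the symmetric group on {0, ..., 7}; p[i] = image of i

def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(N))

def inverse(p):
    inv = [0] * N
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj(x, g):
    """x^g = g^{-1} x g."""
    return compose(inverse(g), compose(x, g))

def random_element(support):
    """A random permutation moving only the points in `support`."""
    images = list(support)
    random.shuffle(images)
    p = list(range(N))
    for i, s in enumerate(support):
        p[s] = images[i]
    return tuple(p)

def H(p, length):
    """Hash a group element to `length` bytes."""
    return hashlib.sha256(bytes(p)).digest()[:length]

def xor(b1, b2):
    return bytes(u ^ v for u, v in zip(b1, b2))

# Public data: base element x and Bob's public key z = x^b.
x = tuple(random.sample(range(N), N))
b = random_element(range(0, 4))        # Bob's secret key, b in A
z = conj(x, b)

# Alice encrypts m with an ephemeral r in B.
m = b"hi"
r = random_element(range(4, 8))
ciphertext = (conj(x, r), xor(H(conj(z, r), len(m)), m))

# Bob decrypts: (x^r)^b = x^{rb} = x^{br} = (x^b)^r = z^r, since r and b commute.
c1, c2 = ciphertext
recovered = xor(H(conj(c1, b), len(m)), c2)
assert recovered == m
```

With a group this small the scheme is of course trivially breakable by brute force; the point is only to show how the commuting subgroups make decryption work, while recovering b from z = x^b is, in a suitable group, an instance of the conjugacy search problem.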

The difficulty of the conjugacy search problem in a given group, as well as other desirable properties (from either a security or an implementation standpoint) such as an efficiently solvable word problem, computable normal forms, and a super-polynomial growth function, are often most (or at least more) easily studied using geometric methods.

Hence some of the groups which have been suggested in the context of this application include braid groups, Grigorchuk’s group, Thompson’s group/s, Artin groups, free groups, etc.

Other (apparently, or provably) difficult problems arising in geometric group theory may also be used, e.g. subgroup isomorphism in RAAGs (although this may potentially be less tenable in light of Babai’s recent breakthrough in graph isomorphism) or subgroup distortion in hyperbolic groups.

Non-commutative cryptography is still in many ways a nascent field; there are few concrete manifestations—the Algebraic Eraser is a rare example of one—and its security is presumed but yet to be fully tested—as demonstrated by the ongoing debate over the security of the Algebraic Eraser. Perhaps partly due to the relative lack of real-world applications, and partly due to the novelty of the field, cryptanalysis and work to establish the security of proposed protocols has been relatively slow in coming, although such work does exist.


Topology as squinting

One can describe topology, in more descriptive language, as the study of coarse-grained or qualitative features of space. Instead of asking about distances and angles, one asks about connectivity, compactness, the presence of holes (as captured e.g. by homology or more subtly by cohomology), the kinds of paths that live in the space (as captured by the fundamental group), and so on.

Sometimes this loss of precise information is desirable in applications: when we are inundated with data, a more coarse-grained or qualitative perspective can help us see the forest for the trees.

Below we describe two topological tools, both built off homology, which demonstrate this guiding philosophy and how it has turned out to be useful in certain applications.

Persistent homology

Homology is the topological tool of choice in many applications. It contains information about connectedness, compactness, and holes in all dimensions, is conceptually more straightforward than cohomology, and is eminently computable using the tools of linear algebra.

In applications we often start from a point cloud (i.e. a finite set of points) sampled from what we assume to be an underlying manifold. In order to reconstruct the homology of the underlying manifold, we could e.g. take the nerve of an open covering constructed by taking balls centered at every point in our point cloud, but we are immediately faced with the question: how big should we make those balls / how fine a covering should we take? Persistent homology offers a compellingly natural answer to this question: use all possible scales, at once.

What is persistent homology?

Persistent homology, as described by one of its originators (Herbert Edelsbrunner), is “an algebraic tool for measuring topological features of shapes and functions [which] casts the multi-scale organization we frequently observe in nature into a mathematical formalism.”

The basic idea is to measure homology “at all possible scales”, and keep track of how homology varies as we pass between these various scales. Below we give more precise formulations of this for Morse functions, and then for triangulable spaces.

Given a Morse function f: \mathbb{R} \to \mathbb{R} (i.e. f is smooth with non-degenerate critical points), we can consider the topology of the sublevel sets, and pair up critical points which cause connected components to be added or removed. A persistence diagram is a diagrammatic representation of this pairing (figure from Edelsbrunner-Harer):

[Figure: persistence diagram pairing the critical points of a one-dimensional Morse function, from Edelsbrunner-Harer.]

We can generalize this idea to higher dimensions—for a Morse function f: M \to \mathbb{R} on an n-manifold M = M^n, we obtain n persistence diagrams, one in each degree 0, 1, \dots, n-1 of (nonvanishing) homology.

Alternatively, we may define persistent homology on a simplicial complex (or more generally a triangulable space) K by specifying a filtration \varnothing = K_0 \subset K_1 \subset \dots \subset K_m = K.
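As a small illustration of the filtration point of view, here is a minimal sketch (the point cloud and the function names are made up) which computes only the degree-0 persistence, i.e. the births and deaths of connected components, of the Vietoris–Rips filtration of a point cloud, using a union-find structure. In practice, and for higher-degree homology, one would reach for a library such as GUDHI or Ripser.

```python
# Degree-0 persistent homology of a Vietoris-Rips filtration, via union-find.
# Each point gives a component born at scale 0; a component dies at the length
# of the edge that merges it into another component.
from itertools import combinations
import math

def persistence_0(points):
    """Return (birth, death) pairs for H_0; one class never dies."""
    n = len(points)
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    pairs = []
    for length, i, j in edges:              # add edges in order of increasing length
        ri, rj = find(i), find(j)
        if ri != rj:                        # this edge merges two components
            parent[ri] = rj
            pairs.append((0.0, length))
    pairs.append((0.0, math.inf))           # the component that survives all scales
    return pairs

# Two well-separated clusters: expect four short bars, one bar that dies only
# when the clusters merge, and one infinite bar.
cloud = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
print(persistence_0(cloud))
```

Features that persist across a large range of scales (here, the bar that only dies when the two clusters merge, and the infinite bar) are the ones we read as “real” features of the underlying shape.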

The work of Zomorodian and Carlsson, and many others following them, has fleshed out the theory of persistent homology considerably; the Edelsbrunner-Harer survey linked to above is a good starting reference. Some key results:

  • Persistent homology based on homology with coefficients in a field k can be given a k[t]-module structure;
  • Persistent homology is computable in a wide range of settings.
  • Persistence is stable under perturbations (the precise formulation here depends on the setting.)

Applications

One of the first applications of persistent homology was for shape recognition, via a generalization of convex hulls known as alpha shapes. Here we start with a point cloud, and would like to deduce the underlying shape from which (we assume) it was drawn. To do this we start forming alpha shapes—but as above there is a question of which scale we should use in this process. Persistent homology resolves this question by saying that we should take all possible scales, and regard features that “persist” across a (relatively) large(r) range of scales as more likely to be “true” features of the underlying shape.

The idea of persistent homology being potentially useful wherever homology might be useful, but questions of scale exist, can also be seen e.g. in its application to homological criteria for coverage in sensor networks. Given a network of sensors which can communicate with nearby ones, one can “detect” gaps in coverage as non-trivial elements of the second homology of a flag complex with 1-skeleton a sensor communication graph (vertices are sensors, and there is an edge between two vertices if the corresponding sensor ranges overlap.)

The formulation of such a criterion is almost tautological (“there is a hole if the algebra says there is a hole”), but one advantage it has is that second homology can be computed, in a distributed fashion, by the sensors in such a locally-communicating network—while we have not done anything significant conceptually, we have reformulated our intuition in a precise, computable way. Computing it in a robust, noise-proof way which does not require precise knowledge of the distances between sensors, however, requires the use of persistent homology (or other tools.)
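To illustrate the kind of computation involved (though not the distributed algorithm itself), here is a minimal sketch, with a made-up communication graph, that builds the flag complex of a graph and reads off its Betti numbers over \mathbb{Q} from the ranks of simplicial boundary matrices. The octahedron graph in the example is chosen only because its flag complex happens to be a 2-sphere, so it has non-trivial second homology.

```python
# Betti numbers of the flag (clique) complex of a graph, over the rationals:
# beta_k = dim C_k - rank(boundary_k) - rank(boundary_{k+1}).
from itertools import combinations
import numpy as np

def flag_complex(vertices, edges, max_dim=3):
    """All simplices (cliques) of the flag complex, up to dimension max_dim."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    simplices = {0: [(v,) for v in sorted(vertices)]}
    for k in range(1, max_dim + 1):
        simplices[k] = [s for s in combinations(sorted(vertices), k + 1)
                        if all(b in adj[a] for a, b in combinations(s, 2))]
    return simplices

def boundary_matrix(k_simplices, faces):
    """Matrix of the simplicial boundary map from k-simplices to (k-1)-simplices."""
    index = {f: i for i, f in enumerate(faces)}
    D = np.zeros((len(faces), len(k_simplices)))
    for j, s in enumerate(k_simplices):
        for i, v in enumerate(s):
            face = tuple(x for x in s if x != v)
            D[index[face], j] = (-1) ** i
    return D

def betti_numbers(simplices, max_dim=2):
    rank = {0: 0}   # the boundary map out of dimension 0 is the zero map
    for k in range(1, max_dim + 2):
        if simplices.get(k):
            rank[k] = np.linalg.matrix_rank(
                boundary_matrix(simplices[k], simplices[k - 1]))
        else:
            rank[k] = 0
    return {k: len(simplices.get(k, [])) - rank[k] - rank[k + 1]
            for k in range(max_dim + 1)}

# Hypothetical 6-sensor network: the octahedron graph (every pair of sensors
# communicates except the three "antipodal" pairs). Its flag complex is a
# 2-sphere, so beta_2 = 1.
vertices = range(6)
non_edges = {(0, 1), (2, 3), (4, 5)}
edges = [e for e in combinations(vertices, 2) if e not in non_edges]
print(betti_numbers(flag_complex(vertices, edges)))   # {0: 1, 1: 0, 2: 1}
```

Here everything is computed centrally with dense linear algebra; in the sensor setting the same information would be obtained by the sensors themselves in a distributed way, and robustly via persistence, as described above.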

Topological data analysis

Persistent homology has also been a key tool in topological data analysis, starting with (I believe) the work of Gunnar Carlsson and Afra Zomorodian. The intuition there is that large data sets, while extrinsically huge-dimensional, are often intrinsically low-dimensional in nature, and topology, via persistent homology or other means, provides a good way of performing effective dimension reduction, because, very roughly speaking, “shape matters”.

The (statistical or other) theory to back this intuition is not always present, although at least on the mathematical side there has been considerable development, driven in large part by possible applications in topological data analysis, to put the theory of persistent homology and its application to data analysis on a sound mathematical footing. Stability theorems, in particular, show that persistent homology at least has a chance of being a sufficiently robust invariant to be useful in data analysis.

There have been a number of applications of these techniques, and indeed a start-up which has made topological data analysis its killer app, although the use of topological techniques in data analysis is still far from systematic: successful applications often seem somewhat ad hoc, and the generic attempt to apply topological data analysis seems more likely to result in a case of “I found these cycles in my data—but what does that mean?”

Euler calculus

An easier version of homology—or, really, a collapsed or “decategorified” version of it—is the Euler characteristic, and even with part of its topological power collapsed this can still serve useful functions.

For instance, one may notice that the Euler characteristic obeys something that looks like an inclusion-exclusion principle: \chi(A \cup B) = \chi(A) + \chi(B) - \chi(A \cap B). One can formalize this and build a theory of integration “with respect to Euler characteristic” (d\chi) by defining \int \chi_A \,d\chi = \chi(A) for characteristic functions of (sufficiently nice) sets, and then extending linearly and using limits.

The sense in which this is like a “squintier” version of Lebesgue integration (say) can be seen clearly in the axiom underlying that definition: instead of associating to the characteristic function \chi_A of a set A the measure of A, we associate to \chi_A the Euler characteristic of A.

The resulting theory is fairly nice—there is a Fubini theorem, Euler integrals can be efficiently computed via the Euler characteristics of level sets and related objects, &c. Concretely, for a nonnegative integer-valued constructible function h, one has \int_X h \,d\chi = \sum_{s \geq 1} \chi(\{h \geq s\}). Not all functions can be integrated, but the ones that tend to be associated with finite point clouds can (a more precise formulation involves o-minimal structures: the functions which can be integrated are those with discrete range and preimages definable in the o-minimal structure. The theory can be further extended to continuum-valued functions, but that introduces more technicalities, and we do not need it here.)

Viro and Schapira first worked the theory out in the late 1980s, and by now it has found applications in tomography, sensor networks, and so on: we briefly describe a few of these below.

An application to tomography

The Schapira inversion formula is a topological version of the inversion formula for the Radon transform. Suppose we have a compact subset T \subset \mathbb{R}^3, and we “scan” T by slicing along hyperplanes and recording the Euler characteristic of the slices of T—which, in the case of a compact set of the plane, is simply the number of connected components minus the number of holes (which equals the number of bounded connected components of the complement—a baby case of Alexander duality.) This yields a function h: \mathrm{AGr}_2(\mathbb{R}^3) \to \mathbb{Z} (the domain here is the affine Grassmannian of all planes in \mathbb{R}^3, not necessarily going through the origin.)

[Figure: illustration of the Alexander duality count described above.]

Then we can recover the set T (via its characteristic function \chi_T) as follows: encode the information from the “scan” in the relation \mathcal{S} \subset \mathbb{R}^3 \times \mathrm{AGr}_2(\mathbb{R}^3) by (x,\pi) \in \mathcal{S} if x \in \pi \cap T. We may verify h is related to \chi_T by h(\pi) = \int_{\mathbb{R}^3} \chi_T(x) \chi_{\mathcal{S}}(x,\pi) d\chi(x) (a Radon transform, but with d\chi in the place of dx.)

Let \mathcal{S}_x denote the fiber \{\pi \in \mathrm{AGr}_2 (\mathbb{R}^3) : x \in T \cap \pi\}; then \chi(\mathcal{S}_x) = 1 for all x \in T (the affine planes through a given point form a copy of \mathbb{P}^2, which has Euler characteristic 1) and \chi(\mathcal{S}_x \cap \mathcal{S}_y) = 0 for all x \neq y (given x, y \in T, the affine planes which see both of them are exactly the ones which contain the line through both of them—and there is a circle’s—or technically a \mathbb{P}^1’s—worth of such planes, which has Euler characteristic 0.)

The Schapira inversion formula in this case then states that we have \chi_T(x) = \int_{\mathrm{AGr}_2(\mathbb{R}^3)} h(\pi) \chi_{\mathcal{S}}(x,\pi) d\chi(\pi).

Thus in particular it is sufficient to record the connectivity data (i.e. the Euler characteristics) of the slices in order to recover the original shape. This seems almost too good to be true—until you realize that actually knowing the Euler characteristic of every slice is still a nontrivial problem. Still, it does give us some latitude to deform things, which was less present in the original, non-topological Radon transform.

An application to sensor coordination

Baryshnikov and Ghrist have applied Euler integration to the problem of target enumeration via distributed sensor networks. Here the setup is this: we have a bunch of enumeration targets (cats, perhaps, or something less fanciful, such as cars) in a space X covered by a sensor field (in actual applications to be approximated by a sufficiently dense network of sensors.)

Each sensor records the total number of targets it senses in its range; the sensors can also communicate with nearby sensors (those with overlapping ranges) to exchange and compare target counts, but that is the limit of their capabilities. Given this, and that the sensor ranges overlap in undescribed ways, how should we go about recovering an accurate global count?

Let the target support U_\alpha be the subset of all x \in X s.t. the sensor at x senses target \alpha, and let h: X \to \mathbb{Z} be the counting function which simply returns the number of targets counted by the sensor at x. In a toy case where X = \mathbb{R}^2—so that we did not have to worry about boundary effects—and all the target supports were round disks of radius R, we would have \int_X h \,dx = \int_X \left( \sum_\alpha \chi_{U_\alpha} \right) dx = \pi R^2 \cdot (\#\alpha), or \#\alpha = \frac{1}{\mu(U_\alpha)} \int_X h\,dx, where here both dx and \mu refer to Lebesgue measure.

Now, if all the target supports U_\alpha have the same nonzero Euler characteristic N (e.g. if they are all contractible, so N = 1), then we can apply exactly the same logic, but using Euler integration instead of Lebesgue integration, to show that the number of targets is \#\alpha = \frac{1}{N} \int_X h \,d\chi.
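Here is a minimal sketch of that count on a pixel grid, with made-up disk-shaped (hence contractible, N = 1) target supports. The Euler integral is computed via the level-set formula mentioned earlier, \int_X h \,d\chi = \sum_{s \geq 1} \chi(\{h \geq s\}), and the Euler characteristic of each binary image is that of the union of the closed unit squares over its occupied pixels.

```python
# Target enumeration via Euler integration on a pixel grid (illustrative data).
import numpy as np

def euler_characteristic(mask):
    """chi of the union of closed unit squares over the True pixels of `mask`."""
    verts, edges, faces = set(), set(), 0
    for (i, j) in zip(*np.nonzero(mask)):
        faces += 1
        verts.update([(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)])
        edges.update({((i, j), (i + 1, j)), ((i, j), (i, j + 1)),
                      ((i + 1, j), (i + 1, j + 1)), ((i, j + 1), (i + 1, j + 1))})
    return len(verts) - len(edges) + faces

def euler_integral(h):
    """Integral of a nonnegative integer-valued h with respect to d(chi)."""
    return sum(euler_characteristic(h >= s) for s in range(1, int(h.max()) + 1))

# Hypothetical sensor field: h(x) = number of targets seen at x, with four
# overlapping disk-shaped target supports.
grid = np.zeros((60, 60), dtype=int)
ii, jj = np.indices(grid.shape)
for (ci, cj) in [(20, 20), (25, 27), (40, 15), (38, 45)]:   # 4 targets
    grid += ((ii - ci) ** 2 + (jj - cj) ** 2 <= 8 ** 2).astype(int)

print(euler_integral(grid))   # expect 4, even though the supports overlap
```

Note that the count comes out right however the supports happen to overlap; only the assumption \chi(U_\alpha) = 1 is used.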

Hence we did not need our precise geometric hypotheses—that we had no boundary effects, and that all of our target supports were geometrically the same shape; we only needed looser, topological analogues of these hypotheses.


Why quaternions?

Points [vectors] in the plane can be represented using complex numbers, and can then be “multiplied”; i.e., the plane can be given the structure of an \mathbb{R}-algebra. Conversely, complex numbers can be viewed as geometric objects, and operations on them given geometric meaning, by the same construction.
Is there an analogue of this for points in 3-space? Hamilton tried to find such an analogue, and was led to the quaternions. There is a (possibly apocryphal) story of how his son asked him every morning: “Well, Papa, can you multiply triplets?” and always got the same answer: “No, I can only add and subtract them”, with a sad shake of the head. What Hamilton eventually realized was that this is indeed not quite possible … unless we embed the triplets in a 4-dimensional algebra (the quaternions)—and it was this realization that he famously inscribed on Brougham Bridge.
Quaternions are a concise method of representing the automorphisms of three- and four-dimensional spaces. They have the technical advantage that unit quaternions form the simply connected cover of SO(3). For this reason, quaternions are used in computer graphics (Tomb Raider is often cited as the first mass-market computer game to have used quaternions to achieve smooth 3D rotation), control theory, signal processing, attitude control, physics, bioinformatics, and orbital mechanics. For example, it is common for spacecraft attitude-control systems to be commanded in terms of quaternions.
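As a small illustration of how this is used in practice, here is a self-contained sketch of quaternion multiplication and of rotating a vector by a unit quaternion; the conventions (Hamilton product, rotation via q v q^*) are standard, but the function names and the example are ours rather than taken from any particular graphics or control library.

```python
# Unit quaternions as 3D rotations.
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def axis_angle(axis, theta):
    """Unit quaternion for rotation by theta about a unit axis."""
    s = math.sin(theta / 2)
    return (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    """Rotate the vector v by the unit quaternion q, via q v q*."""
    _, x, y, z = qmul(qmul(q, (0.0, *v)), qconj(q))
    return (x, y, z)

# Rotate (1, 0, 0) by 90 degrees about the z-axis: expect (0, 1, 0).
q = axis_angle((0.0, 0.0, 1.0), math.pi / 2)
print(rotate(q, (1.0, 0.0, 0.0)))
# q and -q give the same rotation: the unit quaternions double-cover SO(3).
print(rotate(tuple(-c for c in q), (1.0, 0.0, 0.0)))
```

Composing two rotations is a single quaternion multiplication, and interpolating between unit quaternions (slerp) is what produces the smooth rotations mentioned above.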
 
Quaternion [algebra]s have received another boost from number theory because of their relation to quadratic forms.
Nevertheless, quaternions remain relatively obscure outside of, and even within large parts of, mathematics. Today, vector analysis—which was in fact born out of the subsequent development of quaternions—is used to do the mathematics and physics that could be done with quaternions. Or, to steal Simon L. Altmann’s words,
“… quaternions appear to exude an air of nineteenth century decay, as a rather unsuccessful species in the struggle-for-life of mathematical ideas. Mathematicians, admittedly, still keep a warm place in their hearts for the remarkable algebraic properties of quaternions but, alas, such enthusiasm means little to the harder-headed physical scientist.”  (1986.)

References

  • Wikipedia’s “History of quaternions” and “Quaternion”
  • (see also) John Conway and Derek Smith, On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry