
Cryptosystems using free group automorphisms

Moldenhauer has proposed cryptographic protocols which make use of \mathrm{Aut}(F_n). Here we describe them, for the fun of it, and because a potential closer analysis of their security suggests potentially interesting problems (or perhaps exercises—I don’t know enough / haven’t spent enough time to be able to judge) regarding automorphisms of free groups.

One-time pads of free group automorphisms

Choose a free group F_q, a key-space (\phi_n)_{n=1}^N consisting of some large number N (say, 2^{128}) of automorphisms of F_q, and a linear congruential generator h: \mathbb{Z}/N \to \mathbb{Z}/N with maximal period length.
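As a toy illustration of the last ingredient (the constants below are mine, chosen only to satisfy the full-period conditions for a power-of-two modulus, and are not Moldenhauer’s):

```python
# A minimal full-period linear congruential generator h(x) = (a*x + c) mod N.
# For N a power of two, the Hull-Dobell conditions reduce to: c odd, a ≡ 1 (mod 4).
N = 2**128               # size of the key-space (phi_1, ..., phi_N)
a = 4 * 1_234_567 + 1    # a ≡ 1 (mod 4)
c = 2 * 987_654_321 + 1  # c odd, hence coprime to N

def h(x: int) -> int:
    """One step of the key-index stream: k_{r+1} = h(k_r)."""
    return (a * x + c) % N

# Example: the first few key indices generated from a shared seed.
seed = 42
ks, x = [], seed
for _ in range(5):
    x = h(x)
    ks.append(x)
print(ks)
```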

To communicate securely, Alice and Bob privately agree on a free subgroup F_U with rank equal to the alphabet size, a (minimal, Nielsen-reduced, freely reduced) free generating set U, and a starting seed for the linear congruential generator.

To securely send a message m = m_1 \cdots m_z, Alice generates an equally long sequence of key indices k_1, \dots, k_z using h, and sends the ciphertext \phi_{k_1}(m_1) \cdots \phi_{k_z}(m_z) as an unreduced word in F_q, where we implicitly identify letters of the alphabet with the corresponding generators of F_U in U. She also sends z.

To decrypt the ciphertext, Bob calculates k_1, \dots, k_z using h and then \phi_{k_r}(u) for all 1 \leq r \leq z and u \in U, and uses this to chunk the ciphertext into words corresponding to individual letters, which are then decrypted using the corresponding \phi_{k_r}^{-1}.
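To make the shape of the scheme concrete, here is a minimal Python sketch. The key-space, the index generator and every constant in it are invented for illustration and are not Moldenhauer’s (a real instance would use a huge key-space of automorphisms of a high-rank free group), and, unlike the actual protocol, it keeps the per-letter blocks separate rather than concatenating them into one unreduced word and re-chunking:

```python
# Toy sketch: letters are identified with generators of F_2, and each letter is
# encrypted by applying a pseudorandomly indexed automorphism.
from typing import Dict, List

Word = List[int]  # a word in the free group: generator i is +i, its inverse is -i

def reduce_word(w: Word) -> Word:
    """Freely reduce a word by cancelling adjacent x x^{-1} pairs."""
    out: Word = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return out

def invert(w: Word) -> Word:
    return [-g for g in reversed(w)]

Auto = Dict[int, Word]  # an automorphism, recorded by the images of the positive generators

def apply_auto(phi: Auto, w: Word) -> Word:
    image: Word = []
    for g in w:
        image += phi[g] if g > 0 else invert(phi[-g])
    return reduce_word(image)

# Toy key-space: two Nielsen-type automorphisms of F_2 = <1, 2>, with their inverses.
PHI = {0: {1: [1, 2], 2: [2]},       # x1 -> x1 x2, x2 -> x2
       1: {1: [1],    2: [2, -1]}}   # x1 -> x1,    x2 -> x2 x1^{-1}
PHI_INV = {0: {1: [1, -2], 2: [2]},
           1: {1: [1],     2: [2, 1]}}

def keystream(seed: int, length: int) -> List[int]:
    """Toy index generator standing in for the full-period LCG h sketched above."""
    ks, x = [], seed
    for _ in range(length):
        x = (5 * x + 3) % 16
        ks.append(x % len(PHI))
    return ks

def encrypt(msg: List[int], seed: int) -> List[Word]:
    # the r-th plaintext letter m_r (a generator) is sent as the word phi_{k_r}(m_r)
    return [apply_auto(PHI[k], [m]) for m, k in zip(msg, keystream(seed, len(msg)))]

def decrypt(blocks: List[Word], seed: int) -> List[int]:
    out: List[int] = []
    for block, k in zip(blocks, keystream(seed, len(blocks))):
        out += apply_auto(PHI_INV[k], block)
    return out

msg = [1, 2, 2, 1]
assert decrypt(encrypt(msg, seed=7), seed=7) == msg
```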

This effectively uses the ginormous size of \mathrm{Aut}(F_q) and its highly noncommutative nature (or, more precisely, the difficulty of guessing \phi_{k_1}, \dots, \phi_{k_z} given h and \phi_{k_1}(m_1) \cdots \phi_{k_z}(m_z), but not k_1) for cryptographic security, although the protocol itself (a one-time pad) is not terribly sophisticated.

Vague hyperbolic language aside, the security of this system really rests on how the automorphisms \phi_i are randomly chosen. Moldenhauer proposes a system which uses binary strings of varying lengths (details unspecified here) and maps these to Whitehead automorphisms in some systematically random way.

Question: Is the proposed method of generating the automorphisms secure, or is it vulnerable to heuristic or other attacks?

Public-key encryption with free group automorphisms

Moldenhauer also suggests a public-key protocol: here the public parameters consist of the free group F_q, a freely reduced word w \neq 1 in the free group, and an automorphism f: F_q \to F_q of infinite order.

Alice and Bob pick private keys a, b \in \mathbb{Z}_{>0}, and publish f^a(w) and f^b(w) (respectively) as their public keys.

To securely send a message m to Bob, Alice computes the freely reduced element c := m \cdot f^a(f^b(w)) and sends this to Bob as the ciphertext.

Bob decrypts the message by computing c \cdot (f^b(f^a(w)))^{-1} = m \cdot f^{a+b}(w) \cdot (f^{a+b}(w))^{-1} = m, using the fact that f^a(f^b(w)) = f^{a+b}(w) = f^b(f^a(w)).
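A toy example, with tiny exponents just to see the mechanics (my choice of parameters, purely illustrative): take F_2 = \langle x, y \rangle, f: x \mapsto xy, y \mapsto y (an automorphism of infinite order), w = x, and a = b = 1. Then

f^a(w) = f^b(w) = xy, \qquad f^{a+b}(w) = f(xy) = xy^2,

and for the message m = y^{-1} the ciphertext is c = y^{-1} \cdot xy^2, which Bob decrypts as

c \cdot (f^{a+b}(w))^{-1} = y^{-1}xy^2 \cdot y^{-2}x^{-1} = y^{-1} = m.

Note how the plaintext y^{-1} sits untouched at the front of c, which is exactly the worry raised next.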

Security here is based on the difficulty of determining how much cancellation happens. Presumably, though, not all of the plaintext is cancelled most of the time, so there will likely be some substantial chunk of plaintext just hanging out in front, which seems … not optimal.

Slightly more secure might be to have Alice send f^a(m), have Bob not immediately decrypt, but instead compute f^{b+a}(m) and send this back to Alice, and then have Alice compute f^{-a}(f^{b+a}(m)) = f^b(m). Now Bob decrypts the message by computing m =  f^{-b}(f^b(m)).
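Written out, the modified exchange (a three-pass-style protocol relying on the commuting powers of f) is:

\text{Alice} \to \text{Bob}: \; c_1 = f^a(m)

\text{Bob} \to \text{Alice}: \; c_2 = f^b(c_1) = f^{a+b}(m)

\text{Alice} \to \text{Bob}: \; c_3 = f^{-a}(c_2) = f^b(m)

\text{Bob computes}: \; f^{-b}(c_3) = m.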

We can modify this further for use as a signature / authentication protocol: as a signature, Alice sends f^a(f^b(w)) = f^{a+b}(w) to Bob; Bob computes f^{-b}(f^{a+b}(w)) and verifies that this matches Alice’s public key f^a(w).

The security of these modified protocols is based on the difficulty of efficiently deducing a and b, or m, given only f and the images f^a(m), f^b(m) and f^{a+b}(m); i.e. the difficulty of the (analogue of the) discrete logarithm problem in generic infinite cyclic subgroups \langle f \rangle of \mathrm{Aut}(F_q), acting on the free group.

Questions: Do generic infinite-order automorphisms of the free group actually have good properties from the standpoint of these cryptographic applications? How much cancellation can we typically expect in freely reducing m \cdot f^{a+b}(w)? Moldenhauer’s arXiv preprint doesn’t appear to address these questions, although her thesis might.

Also: if Eve observes a lot of Alice’s communications secured using this last protocol, she will see a lot of pairs (x, f^a(x)), since f^{a+b}(m) = f^a(f^b(m)), and both f^{a+b}(m) and f^b(m) are exchanged in the protocol. How many of these would Eve need to collect before she can try to effectively reconstruct what f^a is? How much will the cancellation / free reduction created by concatenating signatures (or other material) to the end of messages alleviate this?


Convergence of power series: Cauchy-Hadamard

The natural domain of a general holomorphic function can be quite arbitrary—in one (complex) dimension any domain is the domain of holomorphy of some holomorphic function, and in higher dimensions domains of holomorphy may be characterized using certain notions of convexity: holomorphic convexity, which involves the propagation of maximum-principle-type estimates, or pseudoconvexity, which is similar but uses the larger class of plurisubharmonic functions instead of holomorphic functions.

The domains of convergence of functions defined by power series, however, are much more restricted and exhibit more symmetry: in one dimension, the domain of convergence of a power series f(z) = \sum_{n=0}^\infty c_n z^n is a disc of radius R satisfying \dfrac 1R = \limsup_{n \to \infty} |c_n|^{1/n}. This we may prove by noting that, for |z| > R with R as above, the terms of the series do not converge to zero, and hence the series cannot converge; for |z| < R, the series converges by comparison with a geometric series. This is the Cauchy-Hadamard theorem. (On the boundary of the disc of convergence the behavior of the power series is more subtle—we can extract some information using results such as Abel’s theorem.)
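As a quick illustration of why the \limsup (and not an ordinary limit) is the right thing to take, consider c_n = 2^n + (-2)^n, i.e. c_n = 2^{n+1} for n even and c_n = 0 for n odd. Then

\limsup_{n \to \infty} |c_n|^{1/n} = 2, \qquad \text{so } R = \tfrac{1}{2},

consistent with the closed form f(z) = \frac{1}{1-2z} + \frac{1}{1+2z}, whose nearest singularities to the origin are at z = \pm \tfrac12.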

There is an analogue in higher dimensions, which states that (using multi-index notation) a power series f(z) = \sum_\alpha c_\alpha z^\alpha converges in the polydisc of polyradius \rho iff \limsup_{|\alpha| \to \infty} \sqrt[|\alpha|]{|c_\alpha|\rho^\alpha} \leq 1; the proof of this is exactly analogous to the one given above. Note that the union of all such polydiscs may not itself be a polydisc–e.g. the domain of convergence of the power series \sum_{k=0}^\infty z_1^k z_2^k is precisely the set \{|z_1 z_2| < 1\}, which is not a polydisc.
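To check this against the criterion just stated: for \sum_k z_1^k z_2^k we have c_\alpha = 1 when \alpha = (k,k) and c_\alpha = 0 otherwise (and the vanishing coefficients contribute nothing to the limsup), so

\limsup_{|\alpha| \to \infty} \sqrt[|\alpha|]{|c_\alpha| \rho^\alpha} = \lim_{k \to \infty} \left( (\rho_1\rho_2)^k \right)^{1/2k} = \sqrt{\rho_1 \rho_2},

which is \leq 1 precisely when \rho_1 \rho_2 \leq 1; the union of the corresponding open polydiscs is exactly \{|z_1 z_2| < 1\}.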

Nevertheless these domains of convergence still have a certain sort of radial symmetry—they belong to the more general class of domains known as complete, logarithmically convex Reinhardt domains. A Reinhardt domain is a domain U for which (z_1, \dots, z_n) \in U implies (e^{i\theta_1} z_1, \dots, e^{i\theta_n} z_n) \in U for all \theta_1, \dots, \theta_n \in \mathbb{R}; it is complete if (w_1, \dots, w_n) \in U whenever |w_i| \leq |z_i| for all i and some (z_1, \dots, z_n) \in U; and it is logarithmically convex if the logarithmic image \mathrm{Log}(U) := \{(\log|z_1|, \dots, \log|z_n|) : (z_1, \dots, z_n) \in U, \text{ all } z_i \neq 0\} is convex (completeness and logarithmic convexity are distinct conditions, not synonyms). Note that one-dimensional complete Reinhardt domains are precisely discs centered at the origin.

We can think of the Cauchy-Hadamard condition in several variables as giving a constraint on the polyradius \rho, so that the domain of convergence \Omega is the union of all polydiscs with polyradius satisfying the given constraint or, equivalently, the logarithmic image \mathrm{Log}(\Omega) of the domain of convergence is (essentially) the set of all logarithmic polyradii satisfying the given constraint. \mathrm{Log}(\Omega) is convex and closed under decreasing any coordinate (shrinking the polyradius preserves convergence), but it need not be a translated orthant, as it would be if \Omega were a polydisc.
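For instance, for \sum_k z_1^k z_2^k as above we have \Omega = \{|z_1 z_2| < 1\} and

\mathrm{Log}(\Omega) = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 + x_2 < 0\},

an open half-plane: convex and closed under decreasing either coordinate, but not a translated quadrant, as it would be for a polydisc (where \mathrm{Log} of the polydisc of polyradius \rho is \{x_1 < \log \rho_1, \; x_2 < \log \rho_2\}).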


A Graphical View of Categories (II): Monos, Epis, Zeros, Groupoids

Monomorphisms and epimorphisms are special sorts of arrows: they are analogues / generalisations of injective and surjective maps (resp.)—except now we do not speak of elements! The properties of injections and surjections that we wish to abstract involve cancellation under composition: a monomorphism is a morphism which may be cancelled on the left (in compositions); an epimorphism is one which may be cancelled on the right. In graph terms: composing with a monomorphism sends distinct arrows into its domain to distinct arrows into its codomain; composing with an epimorphism sends distinct arrows out of its codomain to distinct arrows out of its domain.
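In symbols (the standard formulation of these cancellation properties): f: a \to b is a monomorphism if

f \circ g_1 = f \circ g_2 \implies g_1 = g_2 \quad \text{for all } g_1, g_2: c \to a,

and an epimorphism if

h_1 \circ f = h_2 \circ f \implies h_1 = h_2 \quad \text{for all } h_1, h_2: b \to c.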

An object is terminal if there is a unique arrow to it from every object in the diagram; it is initial if there is a unique arrow from it to every object in the diagram. You may be reminded of sources and sinks, but the notions are not quite the same: here we do not care that terminal objects may also have outgoing arrows or that initials may also have incoming arrows, and we do care about the number of arrows between our initials / terminals and each of the other vertices. An object is a zero object if it is both terminal and initial.

The inverse of an arrow is an arrow going the other way, such that composition yields identity morphisms (on both sides.)

A category all of whose arrows are invertible is a groupoid; a groupoid with one object is a group. In other words, one may represent a group as a bouquet of circles, with one loop for each group element, corresponding to the morphism given by left-multiplication by that element.
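Concretely, for a group G the corresponding one-object groupoid has a single object \ast with

\mathrm{Hom}(\ast, \ast) = G, \qquad g \circ h = gh, \qquad \mathrm{id}_\ast = e,

so composition of loops is exactly multiplication in G.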


A Graphical View of Categories (I)

A category is [may be thought of as] a directed graph in the sense of Serre—so an ordered 4-tuple (V, E, \iota, \tau) whose members are, resp, the vertices (objects), the edges or arrows (morphisms), the initial vertex (domain) and terminal vertex (codomain) functions—with

  • loops at all vertices (these being the identity morphisms)
  • f \to g \to h implies f \to h (composition of morphisms)
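A minimal sketch of this graph-plus-composition picture in code (the encoding and names are my own, not anything from the text): a finite category is stored as vertices, arrows, endpoint functions and a composition table, and the identity laws are checked on the smallest non-trivial example:

```python
# A finite category as a Serre-style directed graph with identities and composition.
from itertools import product

class FiniteCategory:
    def __init__(self, objects, arrows, dom, cod, compose):
        # arrows: morphism names; dom/cod: dicts arrow -> object;
        # compose: dict (g, f) -> g∘f, defined whenever cod(f) == dom(g);
        # identities are assumed to be named "id_<object>".
        self.objects, self.arrows = objects, arrows
        self.dom, self.cod, self.compose = dom, cod, compose
        self.identity = {x: f"id_{x}" for x in objects}

    def check(self):
        # each object carries a loop acting as a two-sided identity
        for x, i in self.identity.items():
            assert self.dom[i] == x == self.cod[i]
        for f in self.arrows:
            assert self.compose[(f, self.identity[self.dom[f]])] == f
            assert self.compose[(self.identity[self.cod[f]], f)] == f
        # composites exist for composable pairs and have the right endpoints
        for f, g in product(self.arrows, repeat=2):
            if self.cod[f] == self.dom[g]:
                h = self.compose[(g, f)]
                assert self.dom[h] == self.dom[f] and self.cod[h] == self.cod[g]

# The "walking arrow": objects A, B and a single non-identity arrow u: A -> B.
objs = ["A", "B"]
arrs = ["id_A", "id_B", "u"]
dom = {"id_A": "A", "id_B": "B", "u": "A"}
cod = {"id_A": "A", "id_B": "B", "u": "B"}
comp = {("id_A", "id_A"): "id_A", ("id_B", "id_B"): "id_B",
        ("u", "id_A"): "u", ("id_B", "u"): "u"}
FiniteCategory(objs, arrs, dom, cod, comp).check()
```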

There are some subtleties involved here in trying to apply this, in full generality, to actual collections of objects of mathematical interest, due to things of a set-theoretic nature related to Russell’s paradox, but we shall brush over them here.

A functor is a morphism of categories, i.e. a map between categories which respects the structure restrictions on the edges / arrows. From Mac Lane’s Categories for the Working Mathematician (13): ‘Functors were first explicitly recognized in algebraic topology, where they arise naturally when geometric properties are described by means of algebraic invariants … The leading idea in the use of functors in topology is that H_n or \pi_n gives an algebraic picture or image not just of the topological spaces, but also of all the continuous maps between them.’

A functor F is full if every arrow F c \to F c' in the target category comes from an arrow c \to c' in the source category, i.e. if it is surjective on each hom-set; it is faithful if distinct parallel arrows of the source have distinct images, i.e. if it is injective on each hom-set (this is weaker than being an embedding of directed graphs, since a faithful functor may still identify objects, or arrows that are not parallel).

A natural transformation between two functors S, T: C \to B, sometimes called a morphism of functors, is a function \tau from the objects of C to arrows in B which is equivariant under the arrows in C; in other words, \tau assigns to each object c an arrow \tau_c: S(c) \to T(c) such that T(f) \circ \tau_c = \tau_{c'} \circ S(f) for every arrow f: c \to c' in C.

‘As Eilenberg-Mac Lane first observed, “category” has been defined in order to be able to define “functor” and “functor” has been defined in order to be able to define “natural transformation”.’ (Mac Lane 18)


Tangles and a Structure Theorem

The result that we discuss here is Theorem 2.1 in Kawarabayashi and Wollan’s article, originally (3.1) in Graph Minors XVI, which plays a bigger role in the subsequent proof of the graph minor theorem than the “cornerstone” Graph Decomposition Theorem, and which indeed is used in the proof of said decomposition theorem.

This structure theorem states that given an H-minor-free graph G, for every tangle \mathcal{T} in G of order \geq \theta, there exists A \subset V(G) of size at most \alpha such that G - A has an \alpha-bounded \alpha-near embedding into a surface \Sigma into which H cannot be embedded, and this near-embedding captures \mathcal{T} - A.

Here capturing the tangle means that we can find a (\mathcal{T} - A)-central segregation \mathcal{S} of G - A (i.e. a partition \mathcal{S} of G - A into edge-disjoint societies such that no society in \mathcal{S} contains the large side of any separation in \mathcal{T}) with a bounded number of large vortices and vortex decompositions of bounded depth in our near-embedding.

What is a tangle?

A tangle \mathcal{T} of order \theta is a set of ordered separations (A,B) such that for every separation \{A, B\} of G of order \leq \theta, either (A,B) or (B,A) is in \mathcal{T}, and such that G[A_1] \cup G[A_2] \cup G[A_3] \neq G for any (A_i, B_i) \in \mathcal{T} with i=1, 2, 3. In short, a tangle is a “complete” (in the sense of the first axiom) set of small-order separations which leave a substantial part of the graph on the B side (in the sense of the second axiom.)

From the Structure Theorem to the Graph Decomposition Theorem

The structure theorem here plays a key role in Kawarabayashi-Wollan’s proof of the Graph Decomposition Theorem (which is essentially the same as Robertson-Seymour’s proof [?].)

Recall our last restatement of the Graph Decomposition Theorem; this we will prove by induction on |V(G)|. The structure theorem gives us (positive) constants \hat{\theta} and \hat{\alpha} such that, for every tangle of order \geq \hat{\theta}, G is \hat\alpha-close to having an \hat\alpha-bounded \hat\alpha-near embedding into a surface \Sigma into which H does not embed, and this near-embedding H_0, H_1, \dots, H_m captures \mathcal{T}.

Choose \theta = \max(\hat\theta, 3\hat\alpha+1) and \alpha := 4\theta - 2 (\theta is chosen so that 3 + \hat\alpha \leq \theta and \hat\theta \leq \theta hold, and \alpha so that 3\theta - 1 \leq \alpha). For the base case, we observe that we may assume |V(G)| \geq \alpha, otherwise the result is trivially true.

We now induct by decomposing each part H_i of our near-embedding separately and putting the pieces together. The exceptional subgraph H_0 can be left as is; the small vortices can be split along their society vertices; the large vortices can be split along the parts of their vortex decompositions (see p.9 of Kawarabayashi and Wollan for details); inductively we obtain the desired tree-decompositions for each smaller part, and then piece them together by connecting one of their vertices to the new root v_0 (representing G[V(H_0) \cup \hat{A}]); one may check that the torso of the new part V_0 can be \alpha-nearly embedded as desired.


Vortices and Graph Decomposition

At the heart of the proof of Robertson and Seymour’s graph minor theorem lies the graph decomposition theorem (commonly referred to as the graph structure theorem), which is the main result of Graph Minors XVI and which Robertson and Seymour describe as “the cornerstone theorem of the [Graph Minors] series”. This theorem states that any graph G with no H minor can be decomposed into graphs H_i which (almost) constitute “obvious” obstructions to an H minor because there exist surfaces \Sigma_i into which H_i can (almost) be embedded, but H cannot.

Two restatements

More precisely, any H-minor-free graph is the result of clique-sums, with join sets of size \leq\alpha, of graphs H_i which are \alpha-close to admitting a totally bounded \alpha-near embedding into a surface \Sigma into which H does not embed.

Note that a tree decomposition exhibits a graph precisely as a clique-sum of the torsos of its parts along their pairwise intersections. This motivates an even more precise re-statement of the theorem: any H-minor-free graph G has a bounded number \theta of apex vertices Z, and a rooted tree decomposition \{V_t \:|\: t \in T\} with root r such that

  1. its torsos \{G_t \:|\: t \in T\} have vertex subsets \{A_t \:|\: t \in T\} of bounded size \alpha such that G_t - A_t has an \alpha-near embedding into a surface \Sigma_t into which H does not embed;
  2. these embeddings are totally bounded, i.e. G_t - A_t is obtained from a graph embeddable in \Sigma_t by adding \leq \alpha vortices, each of path-width \leq \alpha (in particular, the path decompositions of the vortices have adhesion \leq \alpha, so that this is indeed a strengthening of \alpha-bounded);
  3. for adjacent vertices t, t' \in T, where t \in rTt' (i.e. t is the parent of t'), V_t \cap V_{t'} \subset A_t (i.e. the overlap sits inside the excluded vertices of the parent), and V_t \cap V_{t'} \subset A_{t'} \cup X_{t'} where X_{t'} is either the union of two consecutive parts of a vortex decomposition in G_{t'} - A_{t'}, or an embedded clique of size at most 3 (i.e. the overlap almost sits inside the excluded vertices of the child, modulo two controllable special cases);
  4. the apex vertices Z sit inside A_r.

where \alpha and \theta depend only on the excluded minor H.

This last is the form in which Kawarabayashi and Wollan prove the theorem using induction on |V(G)|.

What is a vortex?

The original motivation for this structure theorem is the result from Graph Minors V, inspired by the earlier result of Dirac and Duffin characterising K_4-minor-free graphs as series-parallel graphs, that states that for H a planar graph, every H-minor-free graph has tree-width bounded by some number which is a function of H only. Additional elements, however, need to be introduced to generalize to the case where H is not planar:

  1. In particular, drawing on Wagner’s characterisation of K_5-minor-free graphs as graphs with tree-decompositions whose parts are either planar or isomorphic to V_8 (the graph obtained by joining opposite vertices of an 8-cycle), we allow the building blocks of our tree-decomposition to be arbitrary graphs of bounded genus.
  2. Adding to a large grid a single vertex connected to every grid vertex produces a K_6-minor-free graph of large genus; similarly, adding k such vertices produces a K_{k+5}-minor-free graph of large genus. Hence we really want our building blocks to be arbitrary graphs of bounded genus plus a bounded number of arbitrarily-connected vertices (such vertices are known as apex vertices, from the image of the archetypal single vertex connected to everything on a grid.)
  3. There is one more sort of obstruction that must be taken into account, and these are vortices …

A vortex is, roughly speaking, a site of limited complexity where we allow arbitrary violations of bounded genus; vortices differ from apex vertices in that their complexity is limited not by the size of their vertex set, but by (essentially) their path-width. The archetypal vortex is a planar graph with edges added between every pair of vertices at distance 2 in the boundary of its infinite face—such graphs have no K_8 minor (Seese and Wessel, cited in Graph Minors XVI, 45), but cannot be obtained (as a family) by adding a bounded number of apex vertices to graphs of bounded genus.

More precisely, a vortex is a pair (G, \Omega) where G is a graph and \Omega is a cyclic ordering of some \bar{\Omega} = \{v_1, \dots, v_n\} \subset V(G) such that we can find a partition of G into edge-disjoint subgraphs H_1, \dots, H_n satisfying

  1. v_i \in V(H_i) and, for every edge e of G, we have e \in H_j for some j.
  2. V(H_j) \cap \bar{\Omega} =: \Omega_j forms a segment or circular interval of \bar{\Omega}, i.e. if v_m, v_n \in \Omega_j, then v_k \in \Omega_j whenever m \leq k \leq n.

(Notational asides: the pair (G[\bar{\Omega}], \Omega) is known as the society of the vortex; H_1, \dots, H_n is a vortex decomposition of (G, \Omega).)

The depth \alpha of a vortex is the maximum of |V(H_i) \cap V(H_j)| over all choices of i \neq j.
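A small sketch of these conditions in code (the encoding is my own: the society vertices \bar{\Omega} are listed in their cyclic order and each part H_j is given just by its vertex set); it checks the circular-interval condition and computes the depth as just defined:

```python
# Check the circular-interval condition for a vortex decomposition and compute its depth.

def is_circular_interval(part_vertices, omega_bar):
    """Do the society vertices lying in this part form a circular interval of omega_bar?"""
    n = len(omega_bar)
    hits = [i for i, v in enumerate(omega_bar) if v in part_vertices]
    if len(hits) <= 1:
        return True
    # walking around the cycle, a circular interval leaves at most one gap between hits
    gaps = sum(1 for a, b in zip(hits, hits[1:] + [hits[0] + n]) if b - a > 1)
    return gaps <= 1

def depth(parts):
    """Depth of the decomposition: the largest intersection of two distinct parts."""
    return max((len(set(p) & set(q))
                for i, p in enumerate(parts) for q in parts[i + 1:]), default=0)

# Toy example: society vertices 0, 1, 2 in cyclic order, three parts (with private vertices).
omega_bar = [0, 1, 2]
parts = [{0, 1, 7}, {1, 2, 8}, {2, 0, 9}]
assert all(is_circular_interval(p, omega_bar) for p in parts)
print(depth(parts))  # -> 1
```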

One way to build a vortex constructively (which also generalises in a visible way from the archetypal example given above) is to start with a cycle C, consider a finite list \Lambda of segments of that cycle, add a vertex v_\gamma for each segment \gamma \in \Lambda, connected (arbitrarily) to the vertices of C lying in \gamma, and then add an edge between v_\gamma and v_{\gamma'} if \gamma \cap \gamma' \neq \varnothing. If no vertex of C appears in more than \alpha intervals of \Lambda, we have a vortex of depth \alpha. This construction corresponds to what are called fringes of depth \alpha in Lovász’s survey. If C was the boundary of a face in an embedding of a graph G, then we say that the graph resulting from this construction was obtained by adding a vortex of depth \alpha to G.
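And a sketch of this construction (labels and encoding mine; for simplicity each fringe vertex is joined to every vertex of its segment rather than an arbitrary subset), building the adjacency sets directly:

```python
# Build a cycle on n vertices and add one fringe vertex per segment (arc) of the cycle.

def add_fringe(n, segments):
    """segments: list of (start, length) arcs of the cycle 0, 1, ..., n-1."""
    adj = {v: set() for v in range(n)}
    for v in range(n):  # the cycle C
        adj[v].add((v + 1) % n)
        adj[(v + 1) % n].add(v)
    arcs = [set((s + i) % n for i in range(l)) for s, l in segments]
    for j, arc in enumerate(arcs):
        w = ("fringe", j)  # the new vertex v_gamma for segment gamma
        adj[w] = set()
        for v in arc:  # join v_gamma to the cycle vertices of its own segment
            adj[w].add(v)
            adj[v].add(w)
        for j2 in range(j):  # join v_gamma and v_gamma' when their segments intersect
            if arc & arcs[j2]:
                w2 = ("fringe", j2)
                adj[w].add(w2)
                adj[w2].add(w)
    return adj

# An 8-cycle with three arcs; no cycle vertex lies in more than 2 of them,
# so (in the terminology above) this is a vortex of depth at most 2.
G = add_fringe(8, [(0, 3), (2, 3), (5, 4)])
```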

One way to visualize what the graph structure theorem is saying is provided by this awesome picture drawn by Felix Reidl:

[Figure: “Hierarchy”, drawn by Felix Reidl]

(here the things sticking out are apex vertices, and the circular things on the surface that are not holes are vortices.)

References

For definitions of terminology which appears but is not defined here see e.g. Kawarabayashi and Wollan’s article, or the chapter on Minors in Diestel’s book. See also these slides from Daniel Marx, which are good for aiding intuition.
