Notes on Broken Time Symmetry

The following is a collection of notes (to myself) concerning the book 'Fully Chaotic Maps and Broken Time Symmetry' by Dean Driebe (Kluwer Academic Publishers, 1999). This is not a book review.
Chapter three develops three different representations of the Bernoulli map. Unfortunately, Driebe doesn't use the word 'representation', but he should have. In traditional mathematical usage, such as in group theory, the word 'representation' refers to the idea that, e.g., a group can be 'represented' by an algebra of matrices that is homomorphic to the group. Note that different representations have different eigenvalues and eigenvectors (e.g. SU(2) and SO(3)), and even a different number of eigenvectors.

If we keep this in mind, then it should not be a surprise that the eigenfunctions of sections 3.3 and 3.5 can't be directly related. This is not merely a 'change of basis' that takes us from one to the other. These are really distinct 'representations'. With this viewpoint, the following questions arise:

  1. How many other 'representations' are there for the Bernoulli map? How do we know we have found all of them?

  2. In order to stop using the quotes around 'representation', we should more clearly define the abstract 'group', and the homomorphisms between it and the developments of sections 3.3 and 3.5. Is there a 'homomorphism' between these two? Is it one-to-one? Onto? What are its properties? Are the continuous functions of one mapped into a kernel of the other? If not, what is the mapping?

  3. The 'physical' representation of section 3.5 is physical because of the nice differentiable eigenfunctions. That of section 3.3 is 'not physical' because of the lack of differentiability/continuity of the eigenfunctions (and is rather reminiscent of the chaotic point dynamics). We know that we should use 'point dynamics' when discussing the chaotic behaviour of point systems, but that we should use the continuous case when discussing the statistical mechanics of the same system. But when we make this switch, what really happened? This is actually the same question as the one above, but in 'physics' terms. For example, if we have 10 atoms in a box, we want to think in terms of particle dynamics. If we have 1 billion, we want stat mech. It's a sea-change. But in the Bernoulli map, we have an exactly solvable system that allows us to bounce between the two 'representations'. That's why we want to study more carefully the 'morphism' between the two 'representations': it may illuminate the general principles that underlie the transition from particle to stat mech viewpoints, and in particular, the exact nature of the 'loss' of information in the statistical viewpoint.
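
The two viewpoints are easy to play with numerically. Here is a minimal Python sketch (my own, not from the book) contrasting the point dynamics of the Bernoulli map x -> 2x mod 1 with the action of its Frobenius-Perron operator, (Pf)(x) = (1/2)[f(x/2) + f((x+1)/2)], on densities: two nearby trajectories separate exponentially, while a smooth density relaxes to the uniform invariant density, its B_1(x) = x - 1/2 component shrinking by the eigenvalue 1/2 at each step.

```python
# Point picture vs. density picture for the Bernoulli map x -> 2x mod 1.
def bernoulli(x):
    return (2.0 * x) % 1.0

def transfer(f):
    """Frobenius-Perron operator of the Bernoulli map, acting on densities."""
    return lambda x: 0.5 * (f(x / 2.0) + f((x + 1.0) / 2.0))

# Point picture: two nearby initial conditions separate exponentially,
# at a rate of one bit (one factor of 2) per iteration.
x, y = 0.3, 0.3 + 1e-12
for _ in range(35):
    x, y = bernoulli(x), bernoulli(y)
separation = abs(x - y)   # of order 2^35 * 1e-12: macroscopic, despite tiny start

# Density picture: the initial density 1 + B_1(x), with B_1(x) = x - 1/2,
# relaxes smoothly toward the invariant density f = 1, since B_1 is an
# eigenfunction of the transfer operator with eigenvalue 1/2.
f = lambda x: 1.0 + (x - 0.5)
for _ in range(10):
    f = transfer(f)
deviation = max(abs(f(i / 100.0) - 1.0) for i in range(101))  # ~ 2^-10 * 1/2

print(separation, deviation)  # chaotic point separation vs. tiny density deviation
```

The contrast is the whole point of the notes above: the very same map is 'chaotic' in one representation and a plain decaying-eigenmode problem in the other.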

Generalized Functions
Don't be misled by the great stock put into the 'left generalized eigenstates' of section 3.5.2. They're really not that fantastic, and with 20/20 hindsight could have been guessed 'trivially'. Here's why. In (undergrad) textbook physics, things happen in a Hilbert space where the eigenfunctions are in a sense 'trivially self-dual', because they are mutually orthogonal with respect to some measure: the sines and cosines of the Fourier series are orthogonal to each other; the Legendre, Laguerre, Hermite, and Chebyshev polynomials are all 'self-dual' with respect to some measure (weight). The textbooks end there: I, at least, forgot that the universe is bigger than that.

The Bernoulli polynomials have a dual that is completely different: a set of generalized functions. However, that dual has all the right properties that we expect:

$ 1 = \sum_{n=0}^\infty | B_n \rangle \langle \tilde{B}_n | $ is the unit operator, as always, and

$ \delta_{m,n} = \langle \tilde{B}_m | B_n \rangle $

is the orthogonality relationship, as always. What is so unusual here is that $ \langle x | B_n \rangle = B_n(x) $ is a Bernoulli polynomial, whereas its dual is not a polynomial at all, and not even an ordinary function, but is the generalized function

$ \langle x | \tilde{B}_n \rangle = \frac{(-1)^{n-1}}{n!} \left[ \delta^{(n-1)}(x-1) - \delta^{(n-1)}(x) \right] \quad (n \ge 1; \;\; \langle x | \tilde{B}_0 \rangle = 1) $

or equivalently

$ \langle \tilde{B}_n | f \rangle = \frac{1}{n!} \int_0^1 \frac{d^n f}{dx^n} \, dx = \frac{1}{n!} \left[ f^{(n-1)}(1) - f^{(n-1)}(0) \right] $

The proof of the first equation is that it's a fancy way of expressing the Euler-Maclaurin expansion: for a smooth function f(x), we recognize that $ f(x) = \sum_{n=0}^\infty B_n(x) \langle \tilde{B}_n | f \rangle $ is just Euler-Maclaurin. See, for example, Abramowitz & Stegun, equation 23.1.32, and set m=1 and $ p=\infty $ (i.e. we assume f(x) is infinitely differentiable).
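
The biorthogonality is easy to verify mechanically. As a sanity check, here is a Python sketch (function names are mine) that builds the Bernoulli polynomials in exact rational arithmetic and confirms that the dual pairing, acting on a test function f as (1/m!)[f^(m-1)(1) - f^(m-1)(0)] (with the m=0 dual acting as plain integration over [0,1]), really does give delta_{mn} on the B_n:

```python
# Verify <B~_m | B_n> = delta_{mn} for the Bernoulli polynomials,
# with the dual acting as f |-> (1/m!)[f^(m-1)(1) - f^(m-1)(0)].
from fractions import Fraction
from math import comb, factorial

def bernoulli_numbers(N):
    """Bernoulli numbers b_0..b_N via the standard recursion (b_1 = -1/2)."""
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        # sum_{k=0}^{n} C(n+1, k) b_k = 0, solved for b_n
        s = sum(Fraction(comb(n + 1, k)) * b[k] for k in range(n))
        b[n] = -s / (n + 1)
    return b

def bernoulli_poly(n, b):
    """Coefficients [c_0..c_n] of B_n(x) = sum_k C(n,k) b_{n-k} x^k."""
    return [Fraction(comb(n, k)) * b[n - k] for k in range(n + 1)]

def deriv(c, m):
    """m-th derivative of a coefficient-list polynomial."""
    for _ in range(m):
        c = [Fraction(k) * c[k] for k in range(1, len(c))] or [Fraction(0)]
    return c

def peval(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

def dual_pairing(m, n, b):
    """<B~_m | B_n>, using the Euler-Maclaurin form of the dual."""
    c = bernoulli_poly(n, b)
    if m == 0:                      # B~_0 is just integration over [0,1]
        return sum(ck / (k + 1) for k, ck in enumerate(c))
    d = deriv(c, m - 1)
    return (peval(d, 1) - peval(d, 0)) / factorial(m)

b = bernoulli_numbers(8)
for m in range(8):
    for n in range(8):
        assert dual_pairing(m, n, b) == (1 if m == n else 0)
print("biorthogonality <B~_m|B_n> = delta_{mn} verified for m, n < 8")
```

Note that the check only uses polynomial differentiation and boundary evaluation; nothing about generalized functions is assumed, which is exactly the point that the dual pairing is 'just' Euler-Maclaurin.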

So, once we found that the right eigenstates in 3.5.1 were Bernoulli polynomials, we should have 'known' the result of 3.5.2: the 'dual' of a Bernoulli polynomial is given by the generalized functions that make the Euler-Maclaurin series possible. At first, there seemed to be something 'magical' happening in the derivation of the left eigenstates. But with this hindsight, it now seems rather mundane.

Questions/Work items:

  1. Build a table of (generalized function) duals to other well-known non-self-dual polynomials. For example, we start with the Appell (Sheffer A-type zero) polynomials as given by R.P. Boas, Jr. and R.C. Buck, Polynomial Expansions of Analytic Functions, Springer-Verlag, 1964.

  2. Are there any text-book level, garden-variety problems that have Bernoulli or Appell polynomial eigenstates, something simple enough for a first-year-grad-level textbook on e.g. quantum mechanics? How many quantum problems go unsolved because the above non-self-dual Hilbert space is not common knowledge?

Divergent Series
Another bit that seems like 'magic' at first is the manipulation of the formally divergent series of section 3.5.2. It only seems magic because of the general lack of familiarity of physics students with formally divergent series. For example, the Euler series

$ w(z) = \sum_{q=0}^{\infty} q! \, z^{q+1} $

is formally divergent; but for Re(z) < 0, it's just the nice, analytic exponential integral in hiding. Keywords: Borel resummation, Gevrey development. There is a theorem (Gevrey's theorem???) stating that the analytic function is uniquely determined by the formally divergent series, and thus it is 'safe' to confuse the formally divergent series with its finite, analytic resummation.
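
To make the 'safety' concrete, here is a small Python sketch (names are mine) for the Euler series at z = -0.1. Its Borel resummation is the convergent integral $ w(z) = \int_0^\infty e^{-t} \, z/(1 - zt) \, dt $ (expand the geometric series under the integral to recover $\sum q! \, z^{q+1}$ term by term); for Re(z) < 0 the integrand has no pole. Truncating the divergent series near its smallest term (around q ~ 1/|z|) reproduces this value to roughly $e^{-1/|z|}$ accuracy:

```python
# Optimal truncation of the divergent Euler series vs. its Borel sum.
import math

def euler_partial_sum(z, N):
    """Partial sum of w(z) = sum_q q! z^(q+1), truncated at q = N."""
    s, fact = 0.0, 1.0
    for q in range(N + 1):
        s += fact * z ** (q + 1)
        fact *= q + 1
    return s

def borel_sum(z, tmax=50.0, steps=400000):
    """Borel resummation integral, by trapezoidal quadrature (z < 0)."""
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-t) * z / (1.0 - z * t)
    return total * h

z = -0.1
exact = borel_sum(z)              # the analytic function 'hiding' in the series
trunc = euler_partial_sum(z, 10)  # optimal truncation point is near q ~ 1/|z|
print(exact, trunc, abs(exact - trunc))
```

The individual terms q! z^(q+1) eventually blow up, yet the truncated sum shadows the analytic function to several digits: this is the 'hyperconvergence' referred to below.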

Questions/Work items:

  1. Quantum mechanics is rife with formally divergent series (that's what renormalization is about). Just how general can one make this 'Gevrey theorem'? General enough to prove that the finite, renormalized solutions of formally divergent series are unique?
  2. Many physicists (e.g. Roger Penrose) express a certain distaste for things like QED because of the occurrence of formally divergent expressions, and conclude that there must be some other theory (e.g. quantum gravity) that will be finite, with no need for renormalization. But in fact, is this a chimera, due to the lack of familiarity of physicists with the machinery of formally divergent sums and their role as asymptotic expansions of analytic functions? Is the fact that QED works so well, giving 12 decimal places with only fifth- or sixth-order corrections, just a manifestation of the 'hyperconvergence' generally seen in formally divergent sums?

Sept 2001, Linas Vepstas
Copyleft (c) 2001 Linas Vepstas