## Infinitesimal generator of the Fourier transform

Let’s define the Fourier transform of a function $f$ as:

$$(\mathcal{F}f)(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, \mathrm{d}x. \qquad (1)$$

The upshot of this post will be to show (at least from a non-rigorous physicist’s perspective) that **the Fourier transform can be expressed as an exponential of a differential operator** (also called a hyperdifferential operator):

$$\mathcal{F} = \exp\left[\frac{i\pi}{4}\left(\partial_x^2 - x^2 + 1\right)\right]. \qquad (2)$$

So, in a way, the differential operator $\frac{1}{2}\left(x^2 - \partial_x^2\right)$ (which is, incidentally, essentially the well-known Hamiltonian of the harmonic oscillator) is the infinitesimal generator of the Fourier transform.

Note that this also generalises naturally to a fractional Fourier transform of order $\alpha$, as

$$\mathcal{F}^{\alpha} = \exp\left[\frac{i\pi\alpha}{4}\left(\partial_x^2 - x^2 + 1\right)\right].$$

All this is obviously not a new result; I first saw this trick when reading this 1977 paper by K. B. Wolf, but presumably there are earlier references – do leave me a comment if you know the original source. Now on to where this equation is coming from…
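As a quick numerical sanity check (my own sketch, not from Wolf’s paper), one can discretize the operator in eq. (2) on a grid with finite differences and compare its matrix exponential with an exactly known Fourier transform. The grid parameters and the shifted-Gaussian test function below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

# Discretize x on a finite grid (arbitrary choices for illustration)
N, L = 256, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Second-derivative operator via central finite differences
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dx**2

# F = exp[ i*pi/4 * (d^2/dx^2 - x^2 + 1) ], cf. eq. (2)
F_op = expm(1j * np.pi / 4 * (D2 - np.diag(x**2) + np.eye(N)))

# Test function: a shifted Gaussian, whose Fourier transform (with the
# convention of eq. (1)) is known in closed form
f = np.exp(-(x - 1) ** 2 / 2)
f_hat_exact = np.exp(-1j * x) * np.exp(-x**2 / 2)

err = np.max(np.abs(F_op @ f - f_hat_exact))
print(err)  # small; limited by the finite-difference discretization
```

The agreement is limited only by the grid spacing; a spectral discretization of $\partial_x^2$ would do even better.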

## Spatial studies

After a few hours of tinkering around in Python, here’s the result (i.e. a random sample of the result): It certainly lacks the stringency of the original sketches, but hey, it’s animated 🙂

If you’re interested in the Python source, let me know!

## Fluctuations of the time-weighted average price

Motivated by an interview question posed to a friend of mine recently, today I’d like to talk about the **time-weighted average price** (**TWAP**) of a financial asset (e.g. a stock).

Let’s consider the **price** $S_t$ of our stock on a short time scale (e.g. **intraday**) corresponding to the interval $[0, T]$. Its price will start at $S_0$ and evolve through to $S_T$; the time-weighted average price is

$$\mathrm{TWAP} = \frac{1}{T} \int_0^T S_t\, \mathrm{d}t.$$

The TWAP defined in this way is interesting, since this is the **effective price** obtained when executing a large order in the market by splitting it into smaller chunks and executing them throughout the day at a constant rate. This TWAP algorithm is one of the basic algorithmic execution tools used e.g. by asset managers to minimize market impact costs. More sophisticated ones include the volume-weighted average price (VWAP), where one adjusts the volume executed at each time of the day proportionally to the typical trading volume at that time.

With this motivation, let us look at the **statistics of the TWAP**, and especially its fluctuations. Usually, one approximates the logarithm of the stock price by a Brownian motion (corresponding to the assumption that *relative* price changes are independent and identically distributed). However, on small time scales (when fluctuations are small), the compounding effect is negligible. For simplicity, I will hence assume that $S_t$ itself is a Brownian motion with drift $\mu$ and variance $\sigma^2$. This means that *absolute* price changes are i.i.d.

## Moments of the TWAP

The Gaussian process $S_t$ is fully determined by its first two moments,

$$\langle S_t \rangle = S_0 + \mu t, \qquad \langle S_t S_{t'} \rangle_c = \sigma^2 \min(t, t').$$

From this, we obtain the first moments of the TWAP (for simplicity I subtracted the trivial component $S_0$):

$$\langle \mathrm{TWAP} \rangle - S_0 = \frac{1}{T} \int_0^T \mu t\, \mathrm{d}t = \frac{\mu T}{2},$$

$$\langle \mathrm{TWAP}^2 \rangle_c = \frac{1}{T^2} \int_0^T \!\! \int_0^T \sigma^2 \min(t, t')\, \mathrm{d}t\, \mathrm{d}t' = \frac{\sigma^2 T}{3}.$$

In other words, the **variance of the TWAP relates to the variance of the intraday stock price change** as

$$\langle \mathrm{TWAP}^2 \rangle_c = \frac{1}{3} \langle S_T^2 \rangle_c$$

($\langle \cdots \rangle_c$ denotes connected expectation values here).

This procedure can easily be continued to calculate higher moments of the TWAP in terms of $\mu$ and $\sigma$. In fact, given that a linear combination of Gaussians is again a Gaussian, we can directly infer that the TWAP has a Gaussian distribution with mean $S_0 + \mu T/2$ and variance $\sigma^2 T / 3$.
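These moments are easy to check with a quick Monte Carlo simulation (a sketch, not from the original post; all parameter values are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of Var(TWAP) = sigma^2 * T / 3 for Brownian motion
# with drift (S_0 = 0 here; parameters are arbitrary choices)
rng = np.random.default_rng(0)
mu, sigma, T = 0.1, 0.5, 1.0
n_steps, n_paths = 500, 20000
dt = T / n_steps

# Paths S_t = mu*t + sigma*W_t, sampled at n_steps points
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
S = np.cumsum(increments, axis=1)

# TWAP = time average of the price over [0, T]
twap = S.mean(axis=1)

print(twap.mean())  # should be close to mu*T/2 = 0.05
print(twap.var())   # should be close to sigma^2*T/3 ≈ 0.0833
```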

## Distribution of the TWAP via the MSR formalism

There is another way to obtain the full distribution of the TWAP directly, via the Martin-Siggia-Rose formalism. Using the MSR approach, we know that the generating function of any linear functional of the Brownian motion $S_t$ is

$$\left\langle e^{\int_0^T \lambda(t)\, S_t\, \mathrm{d}t} \right\rangle = \exp\left[ \int_0^T \lambda(t) \left( S_0 + \mu t \right) \mathrm{d}t + \frac{\sigma^2}{2} \int_0^T \tilde{s}(t)^2\, \mathrm{d}t \right],$$

where $\tilde{s}$ is the solution of

$$\partial_t \tilde{s}(t) = -\lambda(t), \qquad \tilde{s}(T) = 0,$$

i.e. $\tilde{s}(t) = \int_t^T \lambda(t')\, \mathrm{d}t'$. Applying this to a constant $\lambda(t) = \lambda / T$, we get $\tilde{s}(t) = \frac{\lambda}{T}(T - t)$. From this follows the **generating function of the TWAP**

$$\left\langle e^{\lambda\, \mathrm{TWAP}} \right\rangle = \exp\left[ \lambda \left( S_0 + \frac{\mu T}{2} \right) + \frac{\lambda^2 \sigma^2 T}{6} \right].$$

This is clearly the generating function of a Gaussian distribution with the same moments as computed above.

## Directed Polymers on hierarchical lattices – Tails of the free energy distribution

A **directed polymer in a random medium** is a popular model for studying how an elastic interface in a random energy landscape becomes **rough on large scales**. On a Cartesian lattice in 1+1 dimensions, the roughness exponent is known exactly ($\zeta = 2/3$; see e.g. my earlier introduction to directed polymers) and, in some cases, even the distribution of the free energy can be computed (see my earlier post on simulating the Tracy-Widom distribution). In higher dimensions, analytical results on the scaling exponents of directed polymers are scarce. However, numerical simulations indicate that non-trivial scaling exponents persist above 1+1 dimensions (see ref. [1] below and many more).

Instead of 1+d dimensional Cartesian space, one can consider a directed polymer on any other lattice. In particular, **hierarchical lattices** (as introduced e.g. in ref. [2] below) turn out to be interesting. They exhibit some of the phenomena known from directed polymers on Cartesian lattices (e.g. a phase transition between a rough and a high-temperature phase for sufficiently high dimensions), and can be approached analytically. In the following, I’ll show (following ref. [2]) the exact recursion relation for the free energy distribution of a directed polymer. I’ll then deduce some exponent relations for the stretched exponential tails of the free energy.

## Free energy of a directed polymer on a hierarchical lattice

Consider a hierarchical lattice with branching ratio $b$, constructed recursively following Figure 1. At each step of the recursion, every bond of the lattice is replaced by $b$ independent branches consisting of two independent bonds each. The **branching ratio $b$ plays a role similar to the dimension** of Cartesian space – as ref. [2] shows, it controls the scaling exponents and the existence of phase transitions.

Let us denote by $F_n$ the **free energy of a directed polymer spanned between the bottom and the top points of the hierarchical lattice** after $n$ recursion steps. At zero temperature, the polymer will choose the least-energy path between the bottom and the top point (with the sign convention $F_n = -E_n$, minimizing the energy means maximizing $F_n$). Since the lattice at step $n+1$ consists of $b$ independent branches, $F_{n+1}$ is the maximum over the $b$ branches, each consisting of two independent sub-systems at step $n$. Hence,

$$F_{n+1} = \max_{i=1,\dots,b} \left( F_n^{(i,1)} + F_n^{(i,2)} \right),$$

where the $F_n^{(i,j)}$ are independent copies of $F_n$.

Let us now denote by $P_n$ the **cumulative probability distribution** of $F_n$, i.e. $P_n(x) := \mathbb{P}(F_n \le x)$. Then the recursion above translates into

$$P_{n+1}(x) = \left[ \int_{-\infty}^{\infty} P_n'(y)\, P_n(x - y)\, \mathrm{d}y \right]^b.$$

To see this, note that $\int P_n'(y)\, P_n(x - y)\, \mathrm{d}y$ is the cumulative probability distribution of the sum $F_n^{(i,1)} + F_n^{(i,2)}$, and the maximum of $b$ independent branches is equivalent to taking the $b$-th power of the cumulative distribution function.
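This recursion is easy to simulate with a pool of samples (a sketch of the standard “population dynamics” approach, not from the original post; pool size, iteration count and Gaussian initial bond energies are my arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def iterate_pool(pool, b, rng):
    """One recursion step F_{n+1} = max_{i<=b} (F_n + F_n'): for each new
    sample, draw two independent F_n values per branch, sum, take the max."""
    m = len(pool)
    sums = pool[rng.integers(0, m, (b, m))] + pool[rng.integers(0, m, (b, m))]
    return sums.max(axis=0)

def estimate_omega(b, m=50000, n_iter=10, rng=rng):
    """Estimate omega from the growth factor lambda = 2^omega of the
    standard deviation in the last iteration."""
    pool = rng.standard_normal(m)  # Gaussian bond energies at n = 0
    for _ in range(n_iter):
        pool = iterate_pool(pool, b, rng)
    prev = pool.std()
    pool = iterate_pool(pool, b, rng)
    return np.log2(pool.std() / prev)

print(estimate_omega(b=1))  # b = 1 stays Gaussian: omega = 1/2 exactly
print(estimate_omega(b=2))  # nontrivial exponent, cf. ref. [2]
```

The $b = 1$ case is a useful sanity check, since there the free energy is simply a sum of $2^n$ Gaussian bond energies.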

We expect that for large $n$, the distribution should **converge to a fixed point of the form**

$$P_n(x) \approx P\!\left( \frac{x - 2^n f}{2^{\omega n}} \right),$$

where $f$ is the typical free energy per unit length and $2^{\omega n}$ is the typical scale for free energy fluctuations. $\omega$ is the scaling exponent for the anomalous free energy fluctuations which we are interested in.

Inserting this scaling form into the recursion relation for $P_n$, some simple transformations yield the following **nonlinear integral equation for the fixed point** $P$:

$$P(z) = \left[ \int_{-\infty}^{\infty} P'(u)\, P(\lambda z - u)\, \mathrm{d}u \right]^b. \qquad (1)$$

In contrast to the previous recursion relation, the $P$ on both sides are now the same function (namely the fixed-point distribution), and $\lambda = 2^{\omega}$ is the scaling factor indicating how the free energy fluctuations increase from one iteration to the next.

**Eq. (1) looks astonishingly simple**, but finding its fixed points and the corresponding values of $\lambda$ is equivalent to solving exactly the directed polymer problem on a hierarchical lattice. In ref. [2], an expansion around the Gaussian case $b = 1$ is performed. This turns out to be solvable, and the corresponding expansion for $P$ and $\omega$ can be obtained analytically. Other than that, not much is known about the solutions of eq. (1) – I would be very much interested in hearing of any approaches to solving or approximating it.

## Stretched exponential tails

Eq. (1) puts some interesting **constraints on the behaviour of the free energy distribution for $z \to -\infty$ and $z \to +\infty$**. In analogy to the known solution for the directed polymer in 1+d Cartesian dimensions, let us assume that in both of these limits the probability distribution is a **stretched exponential**, i.e.

$$1 - P(z) \sim e^{-c_+ z^{\eta_+}} \quad \text{for } z \to +\infty,$$

$$P(z) \sim e^{-c_- |z|^{\eta_-}} \quad \text{for } z \to -\infty.$$

Inserting these asymptotics into eq. (1), the right-hand-side integral can be approximated by Laplace’s method (“saddle-point expansion”) and is dominated in each case by $u \approx \lambda z / 2$, i.e. both sub-systems contributing equally. In particular, for large $z$ also $\lambda z / 2$ is large, so the approximation is consistent. We obtain:

$$1 - P(z) \sim b\, e^{-2 c_+ (\lambda z / 2)^{\eta_+}} \quad \text{for } z \to +\infty,$$

$$P(z) \sim e^{-2 b\, c_- (\lambda |z| / 2)^{\eta_-}} \quad \text{for } z \to -\infty.$$

Equating the exponents, we see (as first noted in ref. [3]) that the **exponents of the tails can be expressed in terms of $b$ and $\omega$**:

$$\eta_+ = \frac{1}{1 - \omega} \quad \text{(tail for } z \to +\infty\text{)},$$

$$\eta_- = \frac{1 + \log_2 b}{1 - \omega} \quad \text{(tail for } z \to -\infty\text{)}.$$

As a simple check, note that these equations are fulfilled by the Gaussian limit $b = 1$, $\omega = 1/2$, $\eta_\pm = 2$.

It might be possible to extend the saddle-point expansion to higher orders, in order to obtain more precise results on the pre-exponential factors (let me know if you manage to!). **Nevertheless, it does not appear to put any specific constraints on $\omega$** (or, equivalently, $\lambda$).

So, a challenge for the inclined reader remains – how is the scaling factor $\lambda$ in eq. (1) fixed in terms of the branching ratio $b$? While it is possible that determining $\lambda$ is just as hard as finding the actual fixed-point distribution $P$, I’ve still not given up hope that a simpler approach or approximation might exist…

## References

[1] Marinari, E., Pagnani, A., Parisi, G., & Rácz, Z. (2002). Width distributions and the upper critical dimension of Kardar-Parisi-Zhang interfaces. Physical Review E, 65(2), 026136. https://arxiv.org/pdf/cond-mat/0105158

[2] Derrida, B., & Griffiths, R. B. (1989). Directed polymers on disordered hierarchical lattices. EPL (Europhysics Letters), 8(2), 111. http://www.lps.ens.fr/~derrida/PAPIERS/1989/griffiths-epl-89.pdf

[3] Monthus, C., & Garel, T. (2008). Disorder-dominated phases of random systems: relations between the tail exponents and scaling exponents. Journal of Statistical Mechanics: Theory and Experiment, P01008. https://arxiv.org/pdf/0710.2198

## Stochastic Logistic Growth

A classic model for the growth of a population with a finite carrying capacity is **logistic growth**. It is usually formulated as a differential equation,

$$\dot{N}(t) = r N(t) \left( 1 - \frac{N(t)}{K} \right). \qquad (1)$$

Here $N(t)$ is the size of the population at time $t$, $r$ is the **growth rate** and $K$ is the **carrying capacity**.

The dynamics of eq. (1) is **deterministic**: The initial population $N(0)$ grows (or decays) towards the constant carrying capacity $K$, which is a fixed point of eq. (1). This is seen in the solid trajectories in the figure below:

To make this model more realistic, let’s see how we can extend it to include **stochastic fluctuations** (semi-transparent trajectories in the figure above). I’ll look in the following at a simple stochastic logistic growth model (motivated by some discussions with a friend), where the **steady state can be calculated exactly**. The effect of stochasticity on the steady state is twofold:

- The population size for long times is not fixed, but fluctuates on a scale $\sqrt{K}$ around the carrying capacity $K$.
- The average population size is increased above the carrying capacity $K$, but the shift goes to 0 as $K$ increases (i.e. the deterministic model is recovered for large $K$).

Now let’s look at the calculation in detail…

## A stochastic, individual-based logistic growth model

In eq. (1), the population is described by a real-valued function $N(t)$. In reality, populations consist of **discrete individuals** and the population size doesn’t change continuously. So, a more realistic approach is to describe the population dynamics as a **birth-death process** with the following reactions:

$$A \xrightarrow{\;r\;} A + A, \qquad A + A \xrightarrow{\;\gamma\;} A. \qquad (2)$$

In other words, we assume that during a time interval $\mathrm{d}t$ two events may happen:

- **Birth**: With a probability $r\,\mathrm{d}t$, any individual may give birth to offspring, thus increasing the population size by 1.
- **Death due to competition**: With a probability $\gamma\,\mathrm{d}t$, out of any two individuals one may die due to the competition for common resources. Thus the population size decreases by 1. Note that $\gamma$ is related to the carrying capacity $K$ in eq. (1) by $K = 2r/\gamma$.

Note that the stochasticity in this model is not due to random external influences but due to the discreteness of the population (**demographic noise**).

## Solution of the stochastic model

The system of reaction equations (2) translates into the following **master equation** for the probability $P_n(t)$ of the population size at time $t$ being equal to $n$:

$$\partial_t P_n(t) = r (n-1) P_{n-1}(t) + \frac{\gamma}{2} (n+1) n\, P_{n+1}(t) - \left[ r n + \frac{\gamma}{2} n (n-1) \right] P_n(t). \qquad (3)$$

This looks daunting. However, it can be simplified a lot by introducing the **generating function**

$$G(z, t) := \sum_{n=0}^{\infty} P_n(t)\, z^n.$$

After some algebra we obtain

$$\partial_t G(z, t) = z (z - 1) \left[ r\, \partial_z - \frac{\gamma}{2}\, \partial_z^2 \right] G(z, t).$$

This looks simpler than eq. (3), but still finding the full time-dependent solution does not seem feasible. Let’s focus on the **steady state** where $\partial_t G = 0$, i.e. $\frac{\gamma}{2} \partial_z^2 G = r\, \partial_z G$. Together with the boundary conditions $G(1) = 1$ and $G(0) = P_0 = 0$ (the population can never die out completely, since the death reaction requires two individuals), we obtain the steady-state solution

$$G(z) = \frac{e^{K z} - 1}{e^K - 1}. \qquad (4)$$

Here we set $K = 2r/\gamma$ to connect to the notation of eq. (1). Correspondingly, the steady-state probabilities for population size $n \ge 1$ are

$$P_n = \frac{1}{e^K - 1} \frac{K^n}{n!}. \qquad (5)$$

Of course, these can equivalently also be obtained by solving the recursion relation following from eq. (3) with $\partial_t P_n = 0$. This result can easily be checked against simulations, see figure below.
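A minimal Gillespie simulation of the reactions (2) (my own sketch; the rates, run time and burn-in fraction are arbitrary choices) lets one check the predicted steady-state mean:

```python
import numpy as np

rng = np.random.default_rng(2)
r, K = 1.0, 5.0
gamma = 2 * r / K  # K = 2r/gamma

def gillespie_logistic(n0, t_max, rng):
    """Simulate A -> A+A (rate r per individual) and A+A -> A (rate gamma
    per pair); return the time-weighted average population size."""
    n, t = n0, 0.0
    total_time, weighted_sum = 0.0, 0.0
    while t < t_max:
        birth = r * n
        death = gamma * n * (n - 1) / 2
        rate = birth + death
        dt = rng.exponential(1 / rate)
        if t > t_max / 10:  # discard burn-in
            total_time += dt
            weighted_sum += n * dt
        t += dt
        n += 1 if rng.random() < birth / rate else -1
    return weighted_sum / total_time

mean_n = gillespie_logistic(n0=int(K), t_max=5000.0, rng=rng)
print(mean_n)  # compare to K / (1 - exp(-K)) ≈ 5.034 for K = 5
```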

## Steady state with stochasticity

Let’s try to understand what the steady state of our stochastic model looks like. From eq. (4) we easily obtain the first moments of the distribution of the population size. The **mean population size** is:

$$\langle n \rangle = G'(1) = \frac{K}{1 - e^{-K}} \approx K \left( 1 + e^{-K} \right) \text{ for } K \gg 1.$$

So for large carrying capacities $K$, we recover the same result as in the deterministic model (which is reasonable, since for large population sizes demographic noise is less pronounced!). However, for smaller $K$, **fluctuations increase the average population size**. E.g. for $K = 1$, the average population size in our stochastic model is $\langle n \rangle = 1/(1 - e^{-1}) \approx 1.58$.

Now let us look at the **variance**,

$$\mathrm{Var}(n) = \langle n \rangle \left( 1 + K - \langle n \rangle \right).$$

For large $K$, we have $\mathrm{Var}(n) \approx K$. So we see that stochastic fluctuations **spread out the steady state** to a width $\sqrt{K}$ around the carrying capacity.

## Estimating expected growth

Let’s take some fluctuating time series — say the value of some financial asset, like a stock price. **What is its average growth rate?** This seemingly trivial question came up in a recent discussion I had with a friend; obviously, it is relevant for quantitative finance and many other applications. Looking at it in more detail, it turns out that a precise answer is actually not that simple, and **depends on the time range** for which one would like to estimate the expected growth. So I’d like to share here some insights on this problem and its interesting connections to stochastic processes. Surprisingly, this was only studied in the literature quite recently!

## The setup

Let’s consider the price $S_t$ of an asset at discrete times $t = 0, 1, \dots, N$. The corresponding **growth rates** $g_t$ are defined by

$$S_{t+1} = (1 + g_t)\, S_t. \qquad (1)$$

I.e. if the price increases by 5% during one time step, the growth rate is $g_t = 0.05$. Let’s also assume that our growth process is **stationary**, i.e. that the distribution of the growth rates does not change in time. Say for concreteness that the growth rates $g_t$ are i.i.d. (independent, identically distributed) random variables.

## The arithmetic average

The most immediate idea for computing the average growth rate is just to take the **arithmetic average**, i.e.

$$\bar{g}_{\mathrm{a}} = \frac{1}{N} \sum_{t=0}^{N-1} g_t. \qquad (2)$$

What does this give us? Obviously, by the assumption of stationarity made above, with increasing sample size $N$ the expectation value for the growth rate at the next time step (or any other future time step) is approximated better and better by $\bar{g}_{\mathrm{a}}$:

$$\mathbb{E}[g_t] = \lim_{N \to \infty} \bar{g}_{\mathrm{a}}.$$

This seems to be what we’d expect from an average growth rate, so what’s missing? Let’s go back to our sample of asset prices and take the total growth

$$\frac{S_N}{S_0} = \prod_{t=0}^{N-1} (1 + g_t).$$

By the relationship between the arithmetic and the geometric mean, we know that

$$\prod_{t=0}^{N-1} (1 + g_t) \le \left( 1 + \bar{g}_{\mathrm{a}} \right)^N. \qquad (3)$$

The left-hand side is the total growth after $N$ time periods, and the right-hand side is its estimate from the arithmetic mean in eq. (2). Equality only holds in eq. (3) when all the $g_t$ are equal. When the $g_t$ fluctuate, the total growth will always be strictly less than the estimate from the arithmetic mean, even as $N \to \infty$. **So, by taking the arithmetic mean to obtain the total growth in the price of our asset at the end of the observation period, we systematically overestimate it.** The cause for this is the **compounding** performed to obtain the total growth over more than one time step. This makes our observable a **nonlinear function of the growth rate in a single time step**. Thus, fluctuations in the growth rate don’t average out, and yield a net shift in its expectation value.

The first explicit observation of this effect I’ve found is in a 1974 paper by M. E. Blume, *Unbiased Estimators of Long-Run Expected Rates of Return*. Quantitative estimates from that paper and from a later study by Jacquier, Kane and Marcus show that in typical situations this overestimation may easily be **as large as 25%-100%**.

## The geometric average

So we see that the arithmetic mean does not provide an unbiased estimate of the compounded growth over a longer time period. Another natural way to obtain the average growth, which intuitively seems more adapted to capture that effect, is to take the **geometric average of the growth rates**:

$$\bar{g}_{\mathrm{g}} = \left[ \prod_{t=0}^{N-1} (1 + g_t) \right]^{1/N} - 1. \qquad (4)$$

Now, by construction, the total growth at the end of our observation period, i.e. after $N$ time periods, is correctly captured:

$$\left( 1 + \bar{g}_{\mathrm{g}} \right)^N = \frac{S_N}{S_0}.$$

But this only solves the problem which we observed after eq. (3) for this specific case, when estimating growth for a **time period whose length is exactly the same as the observation time span** from which we obtain the average. For shorter time periods (in particular, when estimating the growth rate $g_t$ for a single time step), the geometric mean will now **underestimate** the growth rate. On the other hand, for time periods even longer than the observation period, it will still overestimate it like the arithmetic mean (see again the paper by Blume for a more detailed discussion).

## Unbiased estimators

Considering the results above, the issue becomes clearer: **A reasonable (i.e. unbiased) estimate for the compounded growth over a time period requires a formula that takes into account both the number of observed time steps $N$, and the number of time steps $T$ over which we’d like to estimate the compounded growth.** For $T = 1$ we can use the arithmetic mean, for $T = N$ we can use the geometric mean, and for general $T$ we need another estimator altogether. For the general case, Blume proposes the following (approximately) **unbiased estimator** of the compounded growth:

$$\widehat{G}_T = \frac{N - T}{N - 1} \left( 1 + \bar{g}_{\mathrm{a}} \right)^T + \frac{T - 1}{N - 1} \left( 1 + \bar{g}_{\mathrm{g}} \right)^T. \qquad (5)$$

**Eq. (5) is a reasonable approximation for the compounded growth** when one has no further information on the form of the distribution, correlations, etc.

For $T = 1$, i.e. estimating growth in a single time step, this gives just the arithmetic mean, which is fine as we saw above. For $T = N$, this gives the geometric mean, which is also correct. For other values of $T$, eq. (5) is a linear combination of the arithmetic and the geometric mean.

To see how the coefficients in eq. (5) arise, let us start with an ansatz of the form

$$\widehat{G}_T = \alpha \left( 1 + \bar{g}_{\mathrm{a}} \right)^T + \beta \left( 1 + \bar{g}_{\mathrm{g}} \right)^T. \qquad (6)$$

Let us further split up the growth rates as $g_t = g + \delta_t$, where $g$ is the “true” average growth rate and the $\delta_t$ are fluctuations. Inserting this as well as the definitions of $\bar{g}_{\mathrm{a}}$ and $\bar{g}_{\mathrm{g}}$ into eq. (6), we get

$$\widehat{G}_T = \alpha \left( 1 + g + \frac{1}{N} \sum_t \delta_t \right)^T + \beta\, (1 + g)^T \exp\left[ \frac{T}{N} \sum_t \ln\left( 1 + \frac{\delta_t}{1 + g} \right) \right].$$

Now let us assume that the fluctuations are small, and satisfy $\mathbb{E}[\delta_t] = 0$, $\mathbb{E}[\delta_t \delta_{t'}] = \sigma^2 \delta_{t t'}$. Expanding to second order in $\delta_t$ (the first order vanishes upon averaging), and taking the expectation value, we obtain

$$\mathbb{E}\left[ \widehat{G}_T \right] \approx (1 + g)^T \left[ \alpha + \beta + \frac{T \sigma^2}{2 N (1 + g)^2} \Big( \alpha (T - 1) + \beta (T - N) \Big) \right].$$

To obtain an estimator that is unbiased (to second order in $\delta_t$), we now choose $\alpha$ and $\beta$ such that the term of order $\delta^0$ is just the true compounded growth $(1 + g)^T$, and the term of order $\sigma^2$ vanishes. This gives the system

$$\alpha + \beta = 1, \qquad \alpha (T - 1) + \beta (T - N) = 0.$$

The solution of this linear system for $\alpha$ and $\beta$ yields exactly the coefficients in eq. (5).

Of course, here we made the assumption of small fluctuations and also a specific ansatz for the estimator. If one has more information on the distribution of the growth rates, this may not be the most adequate one, but with what we know there’s not much more we can do!
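The biases of the arithmetic, geometric and Blume estimators are easy to see in a small Monte Carlo experiment (my own sketch, not from Blume’s paper; the true growth rate, the uniform fluctuations and the values of $N$ and $T$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
g, N, T = 0.05, 12, 6
n_trials = 200000

# i.i.d. growth rates g_t = g + delta_t with uniform fluctuations
delta = rng.uniform(-0.3, 0.3, size=(n_trials, N))
growth = 1 + g + delta

arith = growth.mean(axis=1)                 # 1 + arithmetic mean
geom = np.exp(np.log(growth).mean(axis=1))  # 1 + geometric mean

est_arith = arith**T
est_geom = geom**T
est_blume = (N - T) / (N - 1) * est_arith + (T - 1) / (N - 1) * est_geom

target = (1 + g) ** T  # true compounded growth over T steps
for name, est in [("arithmetic", est_arith), ("geometric", est_geom),
                  ("Blume", est_blume)]:
    print(name, est.mean() / target - 1)  # relative bias
```

The arithmetic estimator comes out biased upward, the geometric one downward (here $1 < T < N$), and the Blume combination sits in between with a much smaller bias.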

## Outlook

As you can see from the above discussion, estimating the expected (compounded) growth over a series of time steps is more complex than it appears at first sight. I’ve shown some basic results, but didn’t touch on many other important aspects:

- In addition to the growth rate, it is also interesting to consider the discount factor. Blume’s approach is extended to this observable in this paper by Ian Cooper.
- If one assumes the growth factors $1 + g_t$ in eq. (1) to be **log-normally distributed, the problem can be treated analytically**. Jacquier, Kane and Marcus discuss this case in detail in this paper, and also provide an exact result for the unbiased estimator.
- The assumption of independent, identically distributed growth rates is not very realistic. On the one hand, we expect the distribution from which the annual returns are drawn to vary in time (e.g. due to underlying macroeconomic conditions). This is discussed briefly in Cooper’s paper. On the other hand, we also expect some amount of correlation between subsequent time steps, even if the underlying distribution does not change. It would be interesting to see how this modifies the results above.

Let me know if you find these interesting — I’ll be glad to expand on that in a future post!