Physics and Mathematics of Disordered Systems

Spatial studies

leave a comment »


Étude spatiale No. IV, source: [1]

Visiting an exhibition of Paul Klee recently, I was quite intrigued (among many other exciting ideas in his paintings) by the aesthetic effect of slightly disordered geometric forms… Looking at his “Etudes spatiales” (“Spatial studies” – three examples, unfortunately in rather low resolution, can be found here, one of them shown on the right), I decided to play around a bit and try to generate something similar algorithmically.

After a few hours of tinkering around in Python, here’s the result (i.e. a random sample of the result, as an animated GIF):

It certainly lacks the stringency of the original sketches, but hey, it’s animated 🙂

If you’re interested in the Python source, let me know!
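For those curious, here is a minimal sketch of one way to generate such slightly disordered grids; this is only an illustration of the idea (all names and parameters mine), not the actual script behind the animation:

```python
import random

def jittered_grid(rows, cols, jitter=0.2, seed=0):
    """Corners of a rows x cols grid of unit cells, with every grid
    point displaced by a small uniform random jitter."""
    rng = random.Random(seed)
    pts = [[(x + rng.uniform(-jitter, jitter), y + rng.uniform(-jitter, jitter))
            for x in range(cols + 1)]
           for y in range(rows + 1)]
    # each cell is the quadrilateral spanned by its four jittered corners
    return [[pts[y][x], pts[y][x + 1], pts[y + 1][x + 1], pts[y + 1][x]]
            for y in range(rows) for x in range(cols)]

cells = jittered_grid(4, 6)
print(len(cells))  # 24 slightly crooked quadrilaterals, ready for plotting
```

Feeding these quadrilaterals to any plotting library (and re-drawing with a new seed for each animation frame) already produces a recognizably Klee-like effect.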




Written by inordinatum

November 16, 2018 at 9:39 am

Posted in Uncategorized


Fluctuations of the time-weighted average price


Motivated by an interview question posed to a friend of mine recently, today I’d like to talk about the time-weighted average price (TWAP) of a financial asset (e.g. a stock).

Let’s consider the price of our stock on a short time scale (e.g. intraday), corresponding to the interval t\in [0;1]. The price starts at P_0 and evolves through P_t to P_1; the time-weighted average price is

\displaystyle T:=\int_0^1 P_t \, \mathrm{d}t

T defined in this way is interesting, since this is the effective price obtained when executing a large order in the market by splitting it into smaller chunks and executing them throughout the day at a constant rate. This TWAP algorithm is one of the basic algorithmic execution tools used e.g. by asset managers to minimize market impact costs. More sophisticated ones include volume-weighted average price (VWAP) where one would adjust the volume executed at each time during the day proportionally to the typical trading volume at that time.
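As a toy illustration (prices and quantities made up), executing in equal slices at a constant rate achieves exactly the time average of the observed prices:

```python
def twap_execution(prices, total_qty):
    """Execute total_qty in equal slices, one slice per price observation;
    return the average price actually achieved."""
    slice_qty = total_qty / len(prices)
    cost = sum(p * slice_qty for p in prices)
    return cost / total_qty

prices = [100.0, 101.5, 99.8, 100.7]   # hypothetical intraday prices
print(twap_execution(prices, 1000))    # equal to the plain time average of the prices
```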

With this motivation, let us look at the statistics of the TWAP T, and especially its fluctuations. Usually, one approximates the logarithm of the stock price by a Brownian motion (corresponding to the assumption that relative price changes are independent and identically distributed). However, on small time scales (when fluctuations are small), the compounding effect is negligible. For simplicity, I will hence assume that S_t := P_t - P_0 is a Brownian motion with drift \mu and variance \sigma per unit time. This means that absolute price changes are i.i.d.

Moments of the TWAP

The Gaussian process S_t is fully determined by its first two moments,

\displaystyle \left\langle S_t \right\rangle = \mu t, \quad \left\langle (S_{t_1}-\mu t_1)(S_{t_2}-\mu t_2) \right\rangle = \sigma \min(t_1, t_2)

From this, we obtain the first moments of the TWAP T (for simplicity I subtracted the trivial P_0 component):

\displaystyle \left\langle T \right\rangle = \int_0^1 \left\langle S_t\right\rangle\, \mathrm{d}t = \int_0^1 \mu\,t\, \mathrm{d}t = \frac{1}{2}\mu

\displaystyle \left\langle T^2 \right\rangle = \int_0^1\mathrm{d}t_1 \int_0^1\mathrm{d}t_2 \left\langle S_{t_1} S_{t_2}\right\rangle = \int_0^1  \mathrm{d}t_1 \int_0^1 \mathrm{d}t_2 \, \left(\mu^2 t_1 t_2 + \sigma \min(t_1, t_2)\right)= \frac{\mu^2}{4} +\frac{\sigma}{3}

In other words, the variance of the TWAP relates to the variance of the intraday stock price change as

\displaystyle \frac{ \left\langle T^2 \right\rangle^c}{ \left\langle S_1^2 \right\rangle^c} = \frac{\sigma/3}{\sigma} = \frac{1}{3}

(\left\langle\,\right\rangle^c denotes connected expectation values here).

This procedure can easily be continued to calculate higher moments of the TWAP T in terms of \mu and \sigma. In fact, since a linear combination of Gaussian variables is again Gaussian, we can directly infer that the TWAP has a Gaussian distribution with mean \mu/2 and variance \sigma/3.
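Both moments are quickly confirmed by a Monte Carlo simulation (a sketch; drift, variance, path and step counts are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 1.0, 0.3            # drift and variance rate of S_t
n_paths, n_steps = 50_000, 200
dt = 1.0 / n_steps

# increments of a Brownian motion with drift mu and variance sigma per unit time
dS = mu * dt + np.sqrt(sigma * dt) * rng.standard_normal((n_paths, n_steps))
S = np.cumsum(dS, axis=1)
T = S.mean(axis=1)              # Riemann-sum approximation of the TWAP

print(T.mean(), T.var())        # should be close to mu/2 = 0.5 and sigma/3 = 0.1
```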

Distribution of the TWAP via the MSR formalism

There is another way to obtain the full distribution of the TWAP directly, via the Martin-Siggia-Rose formalism. Using the MSR approach, we know that the generating function of any linear functional of the Brownian S_t is

\displaystyle \left\langle \exp\left[\int_0^{t_0}\lambda_t S_t\,\mathrm{d}t\right]\right\rangle = \exp\left[\int_0^{t_0}\left(\mu\tilde{S}_t + \frac{\sigma}{2} \tilde{S}_t^2 \right)\mathrm{d}t \right]

where \tilde{S}_t is the solution of

\displaystyle \partial_t\tilde{S}_t = -\lambda_t, \quad \tilde{S}_{t>t_0} = 0

Applying this to a constant \lambda_t = \lambda / t_0, we get \tilde{S}_t = \lambda(1-t/t_0)\theta(t_0-t). From this follows the generating function of the TWAP T

\displaystyle \left\langle e^{\lambda T} \right\rangle = \exp \left(\frac{\mu}{2}t_0\lambda + \frac{\sigma}{6}t_0\lambda^2 \right)

This is clearly the generating function of a Gaussian distribution with the same moments as computed above.

Written by inordinatum

June 2, 2018 at 10:50 am

Directed Polymers on hierarchical lattices – Tails of the free energy distribution


A directed polymer in a random medium is a popular model for studying how an elastic interface in a random energy landscape becomes rough on large scales. On a cartesian lattice in 1+1 dimensions, the roughness exponent is known exactly (see e.g. my earlier introduction to directed polymers) and, in some cases, even the distribution of the free energy can be computed (see my earlier post on simulating the Tracy-Widom distribution). In higher dimensions, analytical results on the scaling exponents of directed polymers are scarce. However, numerical simulations indicate that non-trivial scaling exponents persist above 1+1 dimensions (see ref. [1] below and many more).

Instead of 1+d dimensional cartesian space, one can consider a directed polymer on any other lattice. In particular, hierarchical lattices (as introduced e.g. in ref. [2] below) turn out to be interesting. They exhibit some of the phenomena known from directed polymers on cartesian lattices (e.g. a phase transition between a rough and a high-temperature phase for sufficiently high dimensions), and can be approached analytically. In the following, I’ll show (following ref. [2]) the exact recursion relation for the free energy distribution of a directed polymer. I’ll then deduce some exponent relations for the stretched exponential tails of the free energy.


Figure 1: Construction of a hierarchical lattice with branching ratio b

Free energy of a directed polymer on a hierarchical lattice

Consider a hierarchical lattice with branching ratio b, constructed recursively following Figure 1. At each step of the recursion, every bond of the lattice is replaced by b independent branches consisting of two independent bonds each. The branching ratio b plays a role similar to the dimension of cartesian space – as ref. [2] shows, it controls the scaling exponents and the existence of phase transitions.

Let us denote by f^{(L)} the free energy of a directed polymer spanned between the bottom and the top points of the hierarchical lattice of size L. At zero temperature, the polymer will choose the least-energy path between the bottom and the top point. Since the lattice of size 2L consists of b independent branches, each made up of two independent sub-systems of size L, f^{(2L)} is the maximum over the b branches of the sum of the two sub-system free energies f^{(L)}. Hence,

\displaystyle f^{(2L)} = \text{max}_{j=1...b} \left(f^{(L)}_{j,1}+f^{(L)}_{j,2}\right).

Let us now denote by F the cumulative probability distribution of f, i.e. F(x) = P(f \leq x). Then the recursion above translates into

\displaystyle F^{(2L)}(x) = \left[\int \mathrm{d}y \, F'^{(L)}(y) F^{(L)}(x-y)\right]^b.

To see this, note that \int \mathrm{d}y \, F'(y) F(x-y) is the cumulative probability distribution of f_1 + f_2 and the maximum of b independent branches is equivalent to taking the b-th power of the cumulative distribution function.
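The zero-temperature recursion can be iterated numerically by Monte Carlo, resampling sums from the empirical pool of free energies (a sketch; function names and parameters are mine). For b=1 the maximum is trivial and f^{(2L)} is just a sum of two i.i.d. variables, so the width must grow by a factor \sqrt{2} per doubling, i.e. \theta=1/2, which makes a useful sanity check:

```python
import numpy as np

def iterate(pool, b, rng):
    """One doubling step of the zero-temperature recursion:
    f(2L) = max over b branches of f(L) + f(L), resampled from the pool."""
    n = len(pool)
    sums = rng.choice(pool, size=(b, n)) + rng.choice(pool, size=(b, n))
    return sums.max(axis=0)

rng = np.random.default_rng(1)
f = rng.standard_normal(200_000)   # free energies at the smallest scale
ratios = []
for _ in range(3):                 # b = 1: plain sums of (approximately) i.i.d. draws
    new = iterate(f, 1, rng)
    ratios.append(new.std() / f.std())
    f = new
print(ratios)                      # each ratio should be close to sqrt(2) = 2**0.5
```

Running the same loop with b > 1 gives growth factors 2^\theta with \theta < 1/2, the anomalous fluctuation exponent discussed below.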

We expect that for large L, the distribution F should converge to a fixed point of the form

\displaystyle F^{(L)}(x)=F\left(\frac{x-L c}{L^\theta}\right),

where c is the typical free energy per unit length and L^\theta is the typical scale for free energy fluctuations. \theta is the scaling exponent for the anomalous free energy fluctuations which we are interested in.

Inserting this scaling form into the recursion relation for F^{(2L)}, some simple transformations yield the following nonlinear integral equation for the fixed point F:

\displaystyle F(px)=\left[\int \mathrm{d}y\, F'(y) F(x-y)\right]^b.    (1)


In contrast to the previous recursion relation, the F on both sides are now the same function (namely the fixed point distribution), and p = 2^{-\theta} is the scaling factor indicating how the free energy fluctuations increase from one iteration to the next.

Eq. (1) looks astonishingly simple, but finding its fixed points and the corresponding values of p is equivalent to solving exactly the directed polymer problem on a hierarchical lattice. In ref. [2], an expansion around the Gaussian case b=1 is performed. This turns out to be solvable, and the corresponding expansion for p and F can be obtained analytically. Other than that, not much is known on the solutions of eq. (1) – I would be very much interested in hearing of any approaches to solving or approximating it.

Stretched exponential tails

Eq. (1) puts some interesting constraints on the behaviour of the free energy distribution F(x) for very small and very large x. In analogy to the known solution for the directed polymer in 1+d cartesian dimensions, let us assume that in both of these limits the probability distribution is a stretched exponential, i.e.

\displaystyle F(x) \sim \exp \left[-c_1\,(-x)^\alpha\right] for x \to -\infty

\displaystyle 1-F(x) \sim \exp \left[-c_2\,x^\beta\right] for x \to \infty

Inserting these asymptotics into eq. (1), the right-hand side integral can be approximated by Laplace’s method (“saddle-point expansion”) and is dominated in each case by y \approx x/2. In particular, for large x also y is large, so the approximation is consistent. We obtain:

\displaystyle \exp \left[-c_1\,(-px)^\alpha\right] \sim \exp \left[-2b\,c_1\,(-x/2)^\alpha\right] for x \to -\infty

\displaystyle \exp \left[-c_2\,(px)^\beta\right] \sim \exp \left[-2c_2\,(x/2)^\beta\right] for x \to \infty

Equating the exponents, we see (as first noted in ref. [3]) that the exponents of the tails can be expressed in terms of p and b:

\displaystyle \alpha = \frac{\log 2b}{\log 2p} (tail for x \to -\infty)

\displaystyle \beta= \frac{\log 2}{\log 2p} = \frac{1}{1-\theta} (tail for x \to \infty)

As a simple check, note that these equations are fulfilled by the Gaussian limit b=1, \alpha=\beta=2, \theta=1/2.
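These two relations are easy to evaluate numerically, e.g. to verify the Gaussian limit (a two-line sketch, function name mine):

```python
from math import log

def tail_exponents(b, theta):
    """Tail exponents alpha (x -> -inf) and beta (x -> +inf)
    from the branching ratio b and the scaling factor p = 2**(-theta)."""
    p = 2.0 ** (-theta)
    alpha = log(2 * b) / log(2 * p)
    beta = log(2) / log(2 * p)
    return alpha, beta

alpha, beta = tail_exponents(b=1, theta=0.5)
print(alpha, beta)   # the Gaussian limit: both exponents equal 2
```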

It might be possible to extend the saddle-point expansion to higher orders, in order to obtain more precise results on the pre-exponential factors (let me know if you manage to!). Nevertheless, it does not appear to put any specific constraints on p (or, equivalently, \theta).

So, a challenge for the inclined reader remains – how is the scaling factor p in eq. (1) fixed in terms of the branching ratio b? While it is possible that determining p is just as hard as finding the actual fixed-point distribution F, I’ve still not given up hope that a simpler approach or approximation might exist…


[1] Marinari, E., Pagnani, A., Parisi, G., & Rácz, Z. (2002). Width distributions and the upper critical dimension of Kardar-Parisi-Zhang interfaces. Physical Review E, 65(2), 026136.

[2] Derrida, B., & Griffiths, R. B. (1989). Directed polymers on disordered hierarchical lattices. EPL (Europhysics Letters), 8(2), 111.

[3] Monthus, C., & Garel, T. (2008). Disorder-dominated phases of random systems: relations between the tail exponents and scaling exponents. Journal of Statistical Mechanics: Theory and Experiment, P01008.

Written by inordinatum

October 30, 2017 at 8:31 pm

Stochastic Logistic Growth

with one comment

A classic model for the growth of a population with a finite carrying capacity is logistic growth. It is usually formulated as a differential equation,

\displaystyle \partial_t n(t)=r\, n(t)\left[1-\frac{n(t)}{K}\right]. (1)

Here n(t) is the size of the population at time t, r is the growth rate and K is the carrying capacity.

The dynamics of eq. (1) is deterministic: The initial population n(0)=n_0 > 0 grows (or decays) towards the constant carrying capacity n(t\to\infty) = K, which is a fixed point of eq. (1). This is seen in the solid trajectories in the figure below:


Deterministic vs. stochastic logistic growth. Solid lines: deterministic model in eq. (1). Transparent lines: Realizations of stochastic logistic growth model in eq. (2) below. The colors blue, orange and green correspond to K=50, 30, 10. In all cases r=1.
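For reference, eq. (1) has the standard closed-form solution n(t) = K n_0 e^{rt}/\left[K + n_0(e^{rt}-1)\right]; a small sketch (step size and parameters arbitrary) compares it against a naive Euler integration:

```python
from math import exp

def logistic_exact(t, n0, r, K):
    """Closed-form solution of the deterministic logistic equation (1)."""
    return K * n0 * exp(r * t) / (K + n0 * (exp(r * t) - 1))

def logistic_euler(t, n0, r, K, dt=1e-4):
    """Naive forward-Euler integration of eq. (1), for comparison."""
    n = n0
    for _ in range(int(t / dt)):
        n += dt * r * n * (1 - n / K)
    return n

n0, r, K = 2.0, 1.0, 50.0
print(logistic_exact(10, n0, r, K))   # close to the carrying capacity K = 50
```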

To make this model more realistic, let’s see how we can extend it to include stochastic fluctuations (semi-transparent trajectories in the figure above). I’ll look in the following at a simple stochastic logistic growth model (motivated by some discussions with a friend), where the steady state can be calculated exactly. The effect of stochasticity on the steady state is twofold:

  • The population size for long times is not fixed, but fluctuates on a scale \sigma \approx \sqrt{K} around the carrying capacity.
  • The average population size \left\langle n \right\rangle is increased above the carrying capacity K, but the shift goes to 0 as K increases (i.e. the deterministic model is recovered for large K).

Now let’s look at the calculation in detail…

A stochastic, individual-based logistic growth model

In eq. (1), the population is described by a real-valued function n(t). In reality, populations consist of discrete individuals and the population size doesn’t change continuously. So, a more realistic approach is to describe the population dynamics as a birth-death process with the following reactions:

\begin{array}{rl} A & \stackrel{r}{\longrightarrow} A+A \\ A+A & \stackrel{d}{\longrightarrow} A\end{array} (2)

In other words, we assume that during a time interval dt two events may happen:

  • Birth: With a probability r \, dt, any individual may give birth to offspring, thus increasing the population size by 1.
  • Death due to competition: With a probability d\, dt, out of any two individuals one may die due to the competition for common resources. Thus the population size decreases by 1. Note that d is related to the carrying capacity K in eq. (1) by d = r/K.

Note that the stochasticity in this model is not due to random external influences but due to the discreteness of the population (demographic noise).
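The reaction system in eq. (2) can be simulated exactly with the standard Gillespie algorithm; here is a minimal sketch (function name and parameters mine, with r=1 and d=r/K=0.2 as in the figures):

```python
import random

def gillespie_logistic(r, d, n0, t_max, seed=0):
    """Exact stochastic simulation of A -> 2A (rate r per individual)
    and A + A -> A (rate d per ordered pair). Returns the time average
    of the population size over the second half of the run."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    acc = acc_t = 0.0
    while t < t_max:
        birth = r * n
        death = d * n * (n - 1)
        total = birth + death
        dt = rng.expovariate(total)        # waiting time until the next event
        if t > t_max / 2:                  # discard the initial transient
            w = min(dt, t_max - t)
            acc += n * w
            acc_t += w
        t += dt
        n += 1 if rng.random() < birth / total else -1
    return acc / acc_t

mean_n = gillespie_logistic(r=1.0, d=0.2, n0=2, t_max=5000.0)
print(mean_n)   # close to K/(1 - e^-K) ~ 5.03 for K = r/d = 5
```

Note that the death rate vanishes at n=1, so the population never goes extinct in this model, consistent with the steady state found below.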

Solution of the stochastic model

The system of reaction equations (2) translates into the following master equation for the probabilities p_n(t) of the population size at time t being equal to n:

\begin{array}{rl} \displaystyle \partial_t p_n(t) = & \displaystyle n(n+1)d\,p_{n+1}(t) - n(n-1)d\, p_n(t) \\ & \displaystyle + (n-1)r\,p_{n-1}(t) - nr\, p_n(t)\end{array} (3)

This looks daunting. However, it can be simplified a lot by introducing the generating function

\displaystyle G(\lambda,t) := \sum_{n=0}^{\infty} \lambda^n p_n(t)

After some algebra we obtain

\displaystyle \partial_t G(\lambda,t) = \lambda(1-\lambda) \left[d\, \partial_\lambda^2 G(\lambda,t) - r \,\partial_\lambda G(\lambda,t)\right].

This looks simpler than eq. (3), but finding the full time-dependent solution G(\lambda,t) still does not seem feasible. Let’s focus on the steady state where \partial_t G(\lambda,t)=0. Together with the boundary conditions G(\lambda=1,t)=1 and G(\lambda=0,t)=0, we obtain the steady-state solution G_s(\lambda)

\displaystyle G_s(\lambda) = \frac{e^{K\lambda}-1}{e^{K}-1}. (4)

Here we set d=r/K to connect to the notation of eq. (1). Correspondingly, the steady-state probabilities for population size n are

\displaystyle p_n = \frac{K^n}{n!}\frac{1}{e^K-1}. (5)

Of course, these can equivalently also be obtained by solving the recursion relation (3) with \partial_t p_n(t) = 0. This result can be easily checked against simulations, see figure below.


Stationary probability distribution for the stochastic logistic growth model in eq. (2), for r=1, K=5, d=r/K=0.2. Blue line: exact solution in eq. (5). Yellow bars: Histogram over 10^6 samples.

Steady state with stochasticity

Let’s try to understand what the steady state of our stochastic model looks like. From eq. (4) we easily obtain the first moments of the distribution of the population size. The mean population size is:

\displaystyle \left\langle n \right\rangle = \partial_\lambda\big|_{\lambda=1}G_s(\lambda) = \frac{K}{1-e^{-K}}

So for large carrying capacities K \gg 1, we recover the same result as in the deterministic model (which is reasonable since for large population sizes demographic noise is less pronounced!). However, for smaller K fluctuations increase the average population size. E.g. for K=2, the average population size in our stochastic model is 2/\left(1-e^{-2}\right) \approx 2.31.

Now let us look at the variance,

\begin{array}{rl} \sigma^2 := &\displaystyle \left\langle n^2\right\rangle - \left\langle n \right\rangle^2\\=&\displaystyle\partial_\lambda\big|_{\lambda=1}\lambda \partial_\lambda G_s(\lambda)-\left(\frac{K}{1-e^{-K}}\right)^2\\=& \displaystyle\frac{K\left(1-e^{-K}-K\,e^{-K}\right)}{\left(1-e^{-K}\right)^2}\end{array}.

For large K \gg 1, we have \sigma^2 \approx K. So we see that stochastic fluctuations spread out the steady state to a width \sigma \approx \sqrt{K} around the carrying capacity.

Written by inordinatum

January 1, 2016 at 4:11 pm

Estimating expected growth


Let’s take some fluctuating time series, say the value of some financial asset, like a stock price. What is its average growth rate? This seemingly trivial question came up in a recent discussion I had with a friend; obviously, it is relevant for quantitative finance and many other applications. Looking at it in more detail, it turns out that a precise answer is actually not that simple, and depends on the time range for which one would like to estimate the expected growth. So I’d like to share here some insights on this problem and its interesting connections to stochastic processes. Surprisingly, this was only studied in the literature quite recently!

The setup

Let’s consider the price s_t of an asset at N+1 discrete times t=1...N+1. The corresponding growth rates r_t are defined by

\displaystyle 1+r_t := \frac{s_{t+1}}{s_t}.    (1)

I.e. if s_{t+1}=1.5 s_t, the growth rate is r_t=0.5=50\%. Let’s also assume that our growth process r_t is stationary, i.e. that the distribution of the growth rates does not change in time. Say for concreteness that the growth rates r_t are i.i.d. (independent identically distributed) random variables.

The arithmetic average

The most immediate idea for computing the average growth rate \overline{r} is just to take the arithmetic average, i.e.

\displaystyle \overline{r}^{AM} := \frac{1}{N}\sum_{t=1}^N r_t.    (2)

What does this give us? Obviously, by the assumption of stationarity made above, with increasing sample size N the expectation value for the growth rate at the next time step r_{N+1} (or any other future time step) is approximated better and better by \overline{r}^{AM}:

\displaystyle \lim_{N\to\infty} E[r_{N+1}]-\frac{1}{N}\sum_{t=1}^N r_t =0.

This seems to be what we’d expect for an average growth rate, so what’s missing? Let’s go back to our sample of N+1 asset prices and take the total growth

\displaystyle \frac{s_{N+1}}{s_1} = \prod_{i=1}^N (1+r_i).

By the relationship between the arithmetic and the geometric mean, we know that

\displaystyle \frac{s_{N+1}}{s_1} = \prod_{i=1}^N (1+r_i) \leq \left[\frac{1}{N}\sum_{i=1}^N (1+r_i)\right]^N = \left(1+\overline{r}^{AM}\right)^N. (3)

The left-hand side is the total growth after N time periods and the right-hand side is its estimate from the arithmetic mean in eq. (2). Equality only holds in eq. (3) when all the r_t are equal. When the r_t fluctuate the total growth rate will always be strictly less than the estimate from the arithmetic mean, even as N \to \infty. So, by taking the arithmetic mean to obtain the total growth in the price of our asset at the end of the observation period, we systematically overestimate it. The cause for this is the compounding performed to obtain the total growth over more than one time step. This makes our observable a nonlinear function of the growth rate in a single time step. Thus, fluctuations in the growth rate don’t average out, and yield a net shift in its expectation value.
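Since eq. (3) holds path by path, the overestimation is easy to demonstrate in a simulation; here with i.i.d. Gaussian growth rates (all parameters arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_paths = 250, 10_000
r = rng.normal(0.0005, 0.02, size=(n_paths, N))   # i.i.d. growth rates

actual = np.prod(1 + r, axis=1)                   # realized total growth, path by path
am_estimate = (1 + r.mean(axis=1)) ** N           # estimate from the arithmetic mean

print((am_estimate >= actual).mean())             # 1.0, by the AM-GM inequality
```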

The first explicit observation of this effect I’ve found is in a 1974 paper by M. E. Blume, Unbiased Estimators of Long-Run Expected Rates of Return. Quantitative estimates from that paper and from a later study by Jacquier, Kane and Marcus show that in typical situations this overestimation may easily be as large as 25%-100%.

The geometric average

So we see that the arithmetic mean does not provide an unbiased estimate of the compounded growth over a longer time period. Another natural way to obtain the average growth, which intuitively seems more adapted to capture that effect, is to take the geometric average of the growth rates:

\displaystyle 1+\overline{r}^{GM} := \left[\prod_{t=1}^N (1+r_t)\right]^{1/N}.    (4)

Now, by construction, the total growth at the end of our observation period, i.e. after N time periods, is correctly captured:

\displaystyle \frac{s_{N+1}}{s_1} = \left(1+\overline{r}^{GM}\right)^N

But this only solves the problem which we observed after eq. (3) for this specific case, when estimating growth for a time period whose length is exactly the same as the observation time span N from which we obtain the average. For shorter time periods (in particular, when estimating the growth rate for a single time step r_t), the geometric mean will now underestimate the growth rate. On the other hand, for time periods even longer than the observation period, it will still overestimate it like the arithmetic mean (see again the paper by Blume for a more detailed discussion).

Unbiased estimators

Considering the results above, the issue becomes clearer: A reasonable (i.e. unbiased) estimate for the compounded growth over a time period requires a formula that takes into account both the number of observed time steps N, and the number of time steps over which we’d like to estimate the compounded growth T. For T=1 we can use the arithmetic mean, for T=N we can use the geometric mean, and for general T we need another estimator altogether. For the general case, Blume proposes the following (approximately) unbiased estimator \left(\overline{r}^{UB}\right)^T:

\displaystyle \left(\overline{r}^{UB}\right)^T = \frac{N-T}{N-1}\left(1+\overline{r}^{AM}\right)^T+\frac{T-1}{N-1}\left(1+\overline{r}^{GM}\right)^T.    (5)

Eq. (5) is a reasonable approximation for the compounded growth, absent any further information on the form of the distribution, correlations, etc.
For T=1, i.e. estimating growth in a single time step, this reduces to the arithmetic-mean growth factor 1+\overline{r}^{AM}, which is fine as we saw above. For T=N, it reduces to the geometric-mean growth factor 1+\overline{r}^{GM}, which is also correct. For other values of T, eq. (5) interpolates linearly between the arithmetic and the geometric mean.

To see how the coefficients in eq. (5) arise, let us start with an ansatz of the form

\displaystyle \left(\overline{r}^{UB}\right)^T = \alpha\left(1+\overline{r}^{AM}\right)^T+\beta\left(1+\overline{r}^{GM}\right)^T.    (6)

Let us further split up the growth rates as r_t = \mu + \epsilon_t, where \mu is the “true” average growth rate and \epsilon_t are fluctuations. Inserting this as well as the definitions of \overline{r}^{AM}, \overline{r}^{GM} into eq. (6) we get

\displaystyle \left(\overline{r}^{UB}\right)^T = \alpha\left(1+\mu +\frac{1}{N}\sum_{i=1}^N\epsilon_i\right)^T+\beta\left[\prod_{i=1}^N (1+\mu+\epsilon_i)\right]^{T/N}.

Now let us assume that the fluctuations \epsilon are small, and satisfy E[\epsilon_i\epsilon_j]=\sigma^2 \delta_{ij}. Expanding to second order in \epsilon (the first order vanishes), and taking the expectation value, we obtain

\begin{array}{rl}   \displaystyle E\left[\left(\overline{r}^{UB}\right)^T\right] = & \displaystyle \alpha\left(1+\mu\right)^T\left[1+\frac{T(T-1)\sigma^2}{2(1+\mu)^2N}\right]  \\ & \displaystyle +\beta\left(1+\mu\right)^T\left[1+\frac{T(T-N)\sigma^2}{2(1+\mu)^2N}\right]  \\  & \displaystyle + \mathcal{O}(\sigma^4).  \end{array}

To obtain an estimator that is unbiased (to second order in \sigma), we now choose \alpha and \beta such that the term of order \sigma^0 is just the true growth rate (1+\mu)^T, and the term of order \sigma^2 vanishes. This gives the system

\displaystyle   \begin{array}{rl}  \displaystyle \alpha + \beta & =  1,\\  \displaystyle \frac{T-1}{2}\alpha+\frac{T-N}{2}\beta & = 0.  \end{array}

The solution of this linear system for \alpha and \beta yields exactly the coefficients in eq. (5).
Of course, here we make the assumption of small fluctuations and also a specific ansatz for the estimator. If one has more information on the distribution of the growth rates r this may not be the most adequate one, but with what we know there’s not much more we can do!
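As a numerical check (all parameters arbitrary), one can compare the bias of eq. (5) with that of the plain arithmetic-mean estimate in a Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, N, T = 0.05, 0.10, 12, 6
truth = (1 + mu) ** T                          # true compounded growth factor

r = rng.normal(mu, sigma, size=(200_000, N))   # i.i.d. growth rates
am = r.mean(axis=1)
gm = np.prod(1 + r, axis=1) ** (1.0 / N) - 1
blume = ((N - T) / (N - 1)) * (1 + am) ** T + ((T - 1) / (N - 1)) * (1 + gm) ** T

bias_am = ((1 + am) ** T).mean() - truth
bias_blume = blume.mean() - truth
print(bias_am, bias_blume)   # the bias of the Blume estimator is much smaller
```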


As you can see from the above discussion, estimating the expected (compounded) growth over a series of time steps is more complex than it appears at first sight. I’ve shown some basic results, but didn’t touch on many other important aspects:

  • In addition to the growth rate r, it is also interesting to consider the discount factor (1+r)^{-T}. Blume’s approach is extended to this observable in this paper by Ian Cooper.
  • If one assumes the growth factors 1+r in eq. (1) to be log-normally distributed, the problem can be treated analytically. Jacquier, Kane and Marcus discuss this case in detail in this paper, and also provide an exact result for the unbiased estimator.
  • The assumption of independent, identically distributed growth rates is not very realistic. On the one hand, we expect the distribution from which the annual returns are drawn to vary in time (i.e. due to underlying macroeconomic conditions). This is discussed briefly in Cooper’s paper. On the other hand, we also expect some amount of correlation between subsequent time steps, even if the underlying distribution does not change. It would be interesting to see how this modifies the results above.

Let me know if you find these interesting — I’ll be glad to expand on that in a future post!

Written by inordinatum

January 18, 2015 at 4:18 pm

Extreme avalanches in the ABBM model


Extreme value statistics is becoming a popular and well-studied field. Just as sums of random variables exhibit the universality described by the central limit theorem, maxima of random variables also obey a universal distribution, the generalized extreme-value distribution. This is interesting for studying e.g. extreme events in finance (see this paper by McNeil and Frey) and climate statistics (see e.g. this physics paper in Science and this climatology paper).

Having refereed a related paper recently, I’d like to share some insights on the statistics of extreme avalanches in disordered systems. An example is the particularly large jumps of the fracture front when slowly breaking a disordered solid. A simple but reasonable model for such avalanches is the Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model, originally invented for describing Barkhausen noise. I already touched upon it in a previous blog post, and will focus on it again in the following.
I’ll show an exact formula for the distribution of the maximum avalanche size in an interval, and connect this result to the universal extreme value distribution when considering a large number of avalanches.

Brief review: The ABBM model

To briefly recap (see my previous post for details), the ABBM model consists of a particle at position u(t) pulled along a random landscape by a spring. A key assumption of the model is that the force resulting from the disordered potential is a Brownian motion in u. This allows computing many observables exactly.
For example, when the force due to the spring increases by f, it is well-known (see e.g. arxiv:0808.3217 and the citations therein) that the resulting displacement S of the particle follows the distribution

\displaystyle P_f(S) = \frac{f}{2\sqrt{\pi} S^{3/2}}\exp\left[-\frac{(kS-f)^2}{4S}\right].    (1)

Here, k is the spring constant. For simplicity I took a Brownian random force landscape with variance \sigma=1 here, but the results are straightforward to generalize. This result is basically the distribution of the first-passage time of a Brownian motion with drift k at a given level f. In this context it is also known as the Bachelier-Levy formula (see also my post on first-passage times).
For small forces, f \to 0, and weak springs, k \to 0, (1) becomes a power-law distribution P(S) \sim S^{-3/2}.
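As a sanity check of eq. (1), a simple quadrature (with f = k = 1; grid and cutoff choices mine) confirms the normalization and the first-passage mean \left\langle S \right\rangle = f/k:

```python
import numpy as np

def avalanche_pdf(S, f, k):
    """Eq. (1): distribution of the displacement S after a force step f (sigma = 1)."""
    return f / (2 * np.sqrt(np.pi) * S**1.5) * np.exp(-(k * S - f) ** 2 / (4 * S))

f, k = 1.0, 1.0
S = np.linspace(1e-6, 80.0, 400_000)
P = avalanche_pdf(S, f, k)

dS = np.diff(S)
norm = np.sum(0.5 * (P[1:] + P[:-1]) * dS)               # trapezoidal rule
mean = np.sum(0.5 * ((S * P)[1:] + (S * P)[:-1]) * dS)
print(norm, mean)   # normalization close to 1, mean close to f/k = 1
```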

The largest avalanche in the ABBM model

Now let us consider particularly large avalanches. When applying a force f, the probability to have a total displacement S \leq S_0 is

\displaystyle F^{tot}_f(S_0) := \int_0^{S_0}\mathrm{d}S\,P_f(S).    (2)

Note, however, that the total displacement in this case is the sum of many avalanches triggered by infinitesimal steps in f. So how do we obtain the probability F_f(S_0) that all avalanches triggered during this ramp-up of the force are smaller than S_0? We decompose the force step into n intervals of size f/n, and let n \to \infty:

\displaystyle F_f(S_0) = \lim_{n\to\infty} \left[F^{tot}_{f/n}(S_0)\right]^n.    (3)

Note that we use here the Markovian property of the Brownian Motion, which ensures that avalanches on disjoint intervals are independent. Only this property permits us to take the n-th power to obtain the cumulative distribution for the n intervals; on any non-Markovian landscape the avalanches in these intervals would be correlated and things would be much more complicated.

Combining equations (1), (2) and (3) we can compute F_f(S_0) explicitly:

\begin{array}{rl}  F_f(S_0) =& \lim_{n\to\infty} \left[F^{tot}_{f/n}(S_0)\right]^n = \exp\left[-f\partial_f\big|_{f=0}\int_{S_0}^{\infty}\mathrm{d}S\,P_f(S)\right] \\  =& \displaystyle \exp\left[-f\left(\frac{e^{-k^2 S_0/4}}{\sqrt{\pi S_0}} - \frac{k}{2}\text{erfc} \frac{k\sqrt{S_0}}{2}\right)\right].  \end{array}    (4)

This satisfies the normalization expected of a cumulative distribution function: For S_0 \to 0, F_f(S_0)\to 0 and for S_0 \to \infty, F_f(S_0)\to 1.
The probability distribution of the maximal avalanche size S_{max} is correspondingly

\displaystyle P_f(S_{max}) = \partial_{S_0}\big|_{S_0=S_{max}}F_f(S_0).    (5)

Eqs. (4) and (5) are a nice closed-form expression for the size distribution of the largest avalanche during the force increase by f!
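Eq. (4) is straightforward to evaluate numerically; a quick sketch (function name mine) verifying its limiting values and monotonicity:

```python
from math import exp, sqrt, pi, erfc

def F_max(S0, f, k):
    """Eq. (4): probability that all avalanches during a force step f
    have size at most S0."""
    g = exp(-k * k * S0 / 4) / sqrt(pi * S0) - (k / 2) * erfc(k * sqrt(S0) / 2)
    return exp(-f * g)

f, k = 2.0, 1.0
vals = [F_max(S0, f, k) for S0 in (1e-6, 0.1, 1.0, 10.0, 100.0)]
print(vals)   # increases from ~0 to ~1, as a cumulative distribution should
```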

From few to many avalanches

As one goes to infinitesimal force steps f \to 0, only a single avalanche is triggered. Then it is clear from our construction and eqs. (4), (5) that P_f(S_{max}) \to P_f(S) as defined in (1). So, as expected, when considering a single avalanche the maximal avalanche size and the total avalanche size coincide.

On the other hand, for large force steps f \to \infty, the ramp-up of the force triggers many independent avalanches. The total displacement S is then the sum of many independent avalanche sizes S_1...S_n. Thus, by the central limit theorem, one expects to find a Gaussian distribution for S. We can see this explicitly from (1): The expectation value is \left\langle S \right\rangle = f/k, and the fluctuations around it are \delta S := S-\left\langle S \right\rangle. From (1) one finds that they scale like \delta S \sim \sqrt{f/k^3}. The normalized fluctuations d := \delta S \sqrt{k^3/f} have the distribution

\displaystyle P(d) = \frac{1}{2\sqrt{\pi}}\exp\left(-\frac{d^2}{4}\right) + \mathcal{O}(f^{-1/2}).     (6)

For large f, we are indeed left with a Gaussian distribution for the normalized fluctuations d. This is easily checked numerically, see figure 1 below.
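This can also be checked by direct sampling. In these conventions the total-size distribution (1) should be the inverse-Gaussian distribution P_f(S) = \frac{f}{2\sqrt{\pi}S^{3/2}}e^{-(kS-f)^2/(4S)} with mean f/k and shape parameter f^2/2 — this identification reproduces \left\langle S \right\rangle = f/k and \delta S \sim \sqrt{f/k^3} quoted above, but since eq. (1) appears earlier in the post, take it as an assumption of this sketch (parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import invgauss

# Assumption: eq. (1) is the inverse Gaussian with mean f/k and shape
# parameter f^2/2; in scipy's parametrization this means
# mu = 2/(f*k) and scale = f^2/2.
f, k = 100.0, 1.0
rng = np.random.default_rng(42)
S = invgauss.rvs(2 / (f * k), scale=f**2 / 2, size=200_000, random_state=rng)

print(S.mean())  # expectation <S> = f/k

d = (S - f / k) * np.sqrt(k**3 / f)  # normalized fluctuations
print(d.var())   # eq. (6) is a Gaussian of variance 2
```

For this fairly large step f the normalized fluctuations d indeed have nearly zero mean and variance 2, as eq. (6) predicts.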


Figure 1: Distribution of the total displacement S for increasing force steps f, as given in eq. (1). Observe how the distribution converges to the Gaussian limit eq. (6) indicated by black dotted lines.

So what happens with the maximal avalanche size S_{max} for large steps f \to \infty? S_{max} is now the maximum of many independent avalanche sizes S_1...S_n, and as mentioned in the introduction we expect its distribution to be a universal extreme value distribution.
Since only large S_0 are relevant in (4), the exponent can be approximated by

\displaystyle \frac{e^{-k^2 S_0/4}}{\sqrt{\pi S_0}} - \frac{k}{2}\text{erfc} \frac{k\sqrt{S_0}}{2} \approx \frac{2}{k^2 \sqrt{\pi} S_0^{3/2}}e^{-k^2 S_0/4}.    (7)

Inserting this back into (4), we see that for large f the distribution P_f(S_{max}) is centered around S_f := \frac{4}{k^2} \log f. The cumulative distribution function (4) is well approximated by

\displaystyle F_f(S_{max}) \approx \exp\left[- \frac{2}{k^2 \sqrt{\pi} S_f^{3/2}}e^{-k^2 (S_{max}-S_f)/4} \right].    (8)

This is, up to rescaling, the cumulative distribution function of the Gumbel extreme-value distribution e^{-e^{-x}}. It is easy to check this universal asymptotic form numerically, see figure 2 below. Note that the convergence here is much slower than for the total displacement shown in figure 1, since the typical scale of S_{max} grows only logarithmically with f.
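As a cross-check of the asymptotics (my own sketch): the bracket in the exponent of eq. (4) should approach \frac{2}{k^2 \sqrt{\pi} S_0^{3/2}}e^{-k^2 S_0/4} for large S_0, with the same prefactor that appears in eq. (8). The ratio of the exact bracket to this asymptote should tend to 1:

```python
import numpy as np
from scipy.special import erfc

def bracket(s0, k=1.0):
    """The exponent of eq. (4), divided by -f."""
    return (np.exp(-k**2 * s0 / 4) / np.sqrt(np.pi * s0)
            - (k / 2) * erfc(k * np.sqrt(s0) / 2))

def asymptote(s0, k=1.0):
    """Large-S_0 form, with the prefactor entering eq. (8)."""
    return 2 / (k**2 * np.sqrt(np.pi) * s0**1.5) * np.exp(-k**2 * s0 / 4)

for s0 in (500.0, 2000.0):
    print(s0, bracket(s0) / asymptote(s0))  # ratio tends to 1 as S_0 grows
```

The leading correction to the ratio is of order 1/(k^2 S_0), so the agreement improves slowly but steadily with S_0.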


Figure 2: Distribution of the size of the largest avalanche S_{max} for force steps of increasing size f, as given by eq. (5). Observe how the distribution converges to the universal Gumbel limit in eq. (8) indicated by black dotted lines.

For some applications, the limit of a very soft spring, k \to 0, is important. I leave the details to the reader, but the main picture is that the exponential decay for large S_0 in eq. (7) is replaced by a power law S_0^{-1/2}. Correspondingly, the universal extreme-value distribution observed for large force steps f is no longer the Gumbel distribution (8) but instead a Fréchet distribution.

Side note: The minimal avalanche size

One may be tempted to approach similarly the problem of the minimal avalanche size for a slow ramp-up of the applied force. However, this is not well-defined: Due to the roughness of the Brownian force landscape, as we increase the force more and more slowly, the size of the smallest avalanche decreases more and more. Hence, its distribution will always be discretization-dependent and will not yield a finite result such as eq. (4) in the continuum limit.

All this gives a consistent picture of the maximal avalanche in the ABBM model. I find it really nice that it is so simple, knowing the avalanche size distribution (1), to express the distribution of the size of the largest avalanche in closed form and understand how it behaves!

Written by inordinatum

August 19, 2014 at 9:53 pm

Simulating Directed Polymers and the Tracy-Widom Distribution

with 4 comments

A few posts ago I briefly talked about the problem of a directed polymer in a random medium. In that post, I discussed the scaling behaviour of the polymer fluctuations due to the disordered environment, i.e. its roughness exponent, in 2 dimensions.

As with many other statistical physics problems, numerical simulations are useful for gaining an intuitive understanding. In this post I will show how the directed polymer partition sum can be simulated using Wolfram Mathematica. I will also compare the results to the known exact result for the distribution of the free energy, the Tracy-Widom distribution.

The transfer-matrix algorithm

The partition sum Z of the directed polymer is defined as the sum over all directed paths \Gamma of the Boltzmann weights e^{-\beta E_\Gamma} of each path. The energy E_\Gamma of the path \Gamma is the sum over the random energies E_i of the sites i which are touched by \Gamma. \beta = \frac{1}{T} is the inverse temperature.

In any space dimension, this partition sum can be evaluated rapidly by using a transfer-matrix algorithm. Let us order the lattice along the longitudinal direction of the directed polymer t. Then, the partition sum at site x, t is obtained from the partition sums at this site’s neighbours y\wedge x in the preceding slice t-1:

\displaystyle Z(x,t) = e^{-\beta E_{x,t}}\sum_{y \wedge x} Z(y,t-1).

For the free energy F(x,t) := -\frac{1}{\beta}\ln Z(x,t), we obtain

\displaystyle F(x,t) = E_{x,t} - \frac{1}{\beta} \log \sum_{y \wedge x} e^{-\beta F(y,t-1)}.

Let us focus on the case of very low temperatures, \beta \to \infty. Then the logarithm is dominated by the neighbour y with the minimal (i.e. most negative) free energy F(y,t-1). Hence we obtain

\displaystyle F(x,t) = E_{x,t} + \min_{y \wedge x} F(y,t-1).

This also makes sense intuitively: At zero temperature, the partition sum of the directed polymer is dominated by the minimal-energy configuration. The “cost” of getting to site x is the cheapest “cost” of getting to one of its neighbours, augmented by the (fixed) cost E(x,t) of going to site x.
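To make the zero-temperature limit concrete, here is a small self-contained Python sketch (my own transcription, not the post's code; the tiny lattice size and the periodic neighbour convention y \in \{i, i+1\} are arbitrary choices). It compares the finite-temperature log-sum recursion with the \beta \to \infty min-recursion; since the log-sum over two neighbours exceeds the minimum by at most \log(2)/\beta per slice, the two agree for large \beta:

```python
import numpy as np

rng = np.random.default_rng(1)
wd, length = 8, 6                      # arbitrary tiny lattice
E = rng.standard_normal((length, wd))  # random site energies E_{x,t}

def free_energy(beta):
    """Finite-temperature recursion
    F(x,t) = E_{x,t} - (1/beta) log sum_y exp(-beta F(y,t-1)),
    with neighbours y = i, i+1 (periodic boundary)."""
    F = np.zeros(wd)                   # free initial condition F(x,0) = 0
    for t in range(length):
        a, b = F, np.roll(F, -1)
        m = np.minimum(a, b)
        # stabilized log-sum-exp over the two neighbours
        F = E[t] + m - np.log(np.exp(-beta * (a - m)) + np.exp(-beta * (b - m))) / beta
    return F

# zero-temperature recursion: F(x,t) = E_{x,t} + min over neighbours
F0 = np.zeros(wd)
for t in range(length):
    F0 = E[t] + np.minimum(F0, np.roll(F0, -1))

print(np.max(np.abs(free_energy(200.0) - F0)))  # bounded by length*log(2)/beta ~ 0.02
```

Since the min operation is 1-Lipschitz, the per-slice error does not get amplified as it propagates, so the total deviation is at most length \cdot \log(2)/\beta.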

The Mathematica simulation code

The following short Mathematica routine returns the free energy at the final time t for a directed polymer on a square lattice:

fe = 
  Compile[{{wd, _Integer}, {len, _Integer}}, 
   Module[{pot1, pot2, fval}, 
    pot1 = RandomVariate[NormalDistribution[], {len, wd}];
    pot2 = RandomVariate[NormalDistribution[], {len, wd}];
    fval = ConstantArray[0., wd];
    Do[fval = (fval - (fval - RotateLeft[fval]) UnitStep[
           fval - RotateLeft[fval]]) + pot1[[k]]; 
     fval = (fval - (fval - RotateRight[fval]) UnitStep[
           fval - RotateRight[fval]]) + pot2[[k]];, {k, 1, len}]; 
    fval]];

k is the longitudinal coordinate of the polymer, and runs from 1...len. The vector fval has wd components and contains the free energy values at the current value of k, for all transversal coordinates x=1...wd.
For each k I perform two steps forward, one where site i has neighbours i+1 and i, and one where site i has neighbours i and i-1. This ensures that in the end one obtains a standard square lattice of size 2\,len \times wd, where the directed polymer can choose between two symmetric alternatives at each time step. pot1 and pot2 are len \times wd matrices containing the random energies for each site, for the even and odd values of k, respectively.

Since Mathematica’s built-in Min function does not operate componentwise on vectors, I emulate it by using the componentwise UnitStep function:

a-(a-b)UnitStep[a-b] = Min[a,b].

At t=0, I use a free initial condition: F(x,t=0)=0 \Leftrightarrow Z(x,t=0)=1, i.e. the polymer end at t=0 can be anywhere. Modifying the line

fval = ConstantArray[0., wd];,

one can change this e.g. to a fixed initial condition where the polymer end at t=0 is fixed.
The returned value is the vector F(x,t) where x=1...wd and t=2len is the final time.

For example, to generate 10^4 samples of the free energy for a directed polymer on a 256\times 256 lattice, with the final end fixed, and the initial end free, you would call

Table[fe[256,128][[1]], {10000}]

Note that the [[1]] picks out the free energy for a fixed final site. This simulation takes about a minute on my laptop.
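For completeness, here is a Python/NumPy transcription of the same transfer-matrix sweep (a sketch mirroring the Mathematica routine above; the function name and the lattice parameters in the example are my own choices):

```python
import numpy as np

def dp_free_energy(wd, length, rng=None):
    """Zero-temperature transfer matrix for a directed polymer on a
    2*length x wd lattice with free initial condition F(x,0) = 0.
    Returns the free-energy vector F(x, t = 2*length)."""
    rng = np.random.default_rng() if rng is None else rng
    pot1 = rng.standard_normal((length, wd))  # energies on even slices
    pot2 = rng.standard_normal((length, wd))  # energies on odd slices
    fval = np.zeros(wd)
    for k in range(length):
        # step where site i neighbours sites i and i+1 (cf. RotateLeft)
        fval = np.minimum(fval, np.roll(fval, -1)) + pot1[k]
        # step where site i neighbours sites i-1 and i (cf. RotateRight)
        fval = np.minimum(fval, np.roll(fval, 1)) + pot2[k]
    return fval

# free energy at a fixed final site, over 2000 disorder samples (64x64 lattice)
rng = np.random.default_rng(0)
samples = np.array([dp_free_energy(64, 32, rng)[0] for _ in range(2000)])
fn = (samples - samples.mean()) / samples.std()  # normalized free energy f_n
```

Collecting the normalized free energies f_n over many disorder samples in this way reproduces the skewed histogram discussed below: since the Tracy-Widom variable enters the free energy with a minus sign, the distribution of f_n is skewed to the left.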

The Tracy-Widom distribution of the directed polymer free energy

Theory predicts the following form for the free energy F of the directed polymer model in 2 dimensions:

\displaystyle F(t) = ct - A t^{1/3}\chi + ... ,

where ... indicates terms subdominant in t. c is a constant and c t is the nonuniversal, deterministic part of the free energy. The fluctuations of the free energy due to randomness of the disorder, described by the second term, are universal: The random variable \chi is distributed according to a Tracy-Widom distribution, and only its amplitude A depends on discretisation details.

The precise distribution depends on the symmetries imposed by the initial condition.

  • For a polymer free at t=0, P(\chi \leq s) = F_1(s), where F_1(s) is the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE).
  • For a polymer fixed at t=0, P(\chi \leq s) = F_2(s), where F_2(s) is the Tracy-Widom distribution for the Gaussian Unitary Ensemble (GUE).

The functions F_1 and F_2 are solutions of Painlevé differential equations. Tables of their numerical values can be found with very high precision e.g. on the web page of M. Prähofer.


Tracy-Widom distribution and directed polymer free energy.

Blue line: Tracy-Widom distribution F_1 of the GOE ensemble.
Red line: Tracy-Widom distribution F_2 of the GUE ensemble.
Yellow line: Gaussian distribution.
Histogram: 10^5 simulations of a 256\times 256 directed polymer with free (blue) and fixed (red) initial condition, respectively.

All distributions are normalized so that mean=0 and variance=1.

In the figure on the right you can see how this theoretical prediction compares to numerical simulations using the code above. To get rid of the nonuniversal constants c and A, I consider the “normalized” free energy

f_n := \left(F-\left\langle F \right\rangle\right)/\sqrt{\left\langle F^2 \right\rangle -\left\langle F \right\rangle^2}.

We see that its distribution observed in directed polymer simulations has a skewed shape, which agrees very well with the Tracy-Widom prediction (clearly much better than with the symmetric Gaussian distribution). In the tails, one can even “almost” distinguish the two universality classes corresponding to different initial conditions. For the difference to be really significant, one would need to run some longer simulations though.

Further optimizations

Naturally one would like to make the simulation even faster, in order to have better statistics and reach larger system sizes. One may think that implementing the code above in a low-level programming language (e.g. C++) would yield a large performance gain. However, it turns out that a naive C++ implementation (where the operations on the free-energy vector are implemented by a simple loop over the transversal coordinate x) is actually about three times slower than the Mathematica code above. This is because such array-wise operations can be optimized using modern processor SIMD extensions, which Mathematica’s routines take advantage of.

Of course, one can also take advantage of this in C++. One can either hand-optimize the loop over transversal coordinates using the processor-dependent SIMD routines, or use an array math library. For example, I implemented the transfer-matrix algorithm above in C++ using the vector routines from Intel’s Math Kernel Library, and was able to achieve a speed-up by a factor of five compared to the Mathematica code. Still, considering the ease of programming and debugging, I find that the simple Mathematica implementation fares amazingly well!

If you’re interested in more details, references, higher-dimensional versions, the C++ code, or anything related, go ahead and drop me a comment below — I’ll extend this post if there is interest 😉

Written by inordinatum

January 22, 2014 at 11:27 pm