inordinatum

Physics and Mathematics of Disordered Systems

Stochastic Logistic Growth

A classic model for the growth of a population with a finite carrying capacity is logistic growth. It is usually formulated as a differential equation,

$\displaystyle \partial_t n(t)=r\, n(t)\left[1-\frac{n(t)}{K}\right]$. (1)

Here $n(t)$ is the size of the population at time $t$, $r$ is the growth rate and $K$ is the carrying capacity.

The dynamics of eq. (1) is deterministic: The initial population $n(0)=n_0 > 0$ grows (or decays) towards the constant carrying capacity $n(t\to\infty) = K$, which is a fixed point of eq. (1). This is seen in the solid trajectories in the figure below:

Deterministic vs. stochastic logistic growth. Solid lines: deterministic model in eq. (1). Transparent lines: Realizations of stochastic logistic growth model in eq. (2) below. The colors blue, orange and green correspond to $K=50, 30, 10$. In all cases $r=1$.

To make this model more realistic, let’s see how we can extend it to include stochastic fluctuations (semi-transparent trajectories in the figure above). I’ll look in the following at a simple stochastic logistic growth model (motivated by some discussions with a friend), where the steady state can be calculated exactly. The effect of stochasticity on the steady state is twofold:

• The population size for long times is not fixed, but fluctuates on a scale $\sigma \approx \sqrt{K}$ around the carrying capacity.
• The average population size $\left\langle n \right\rangle$ is increased above the carrying capacity $K$, but the shift goes to 0 as $K$ increases (i.e. the deterministic model is recovered for large $K$).

Now let’s look at the calculation in detail…

A stochastic, individual-based logistic growth model

In eq. (1), the population is described by a real-valued function $n(t)$. In reality, populations consist of discrete individuals, and the population size doesn’t change continuously. A more realistic approach is therefore to describe the population dynamics as a birth-death process with the following reactions:

$\begin{array}{rl} A & \stackrel{r}{\longrightarrow} A+A \\ A+A & \stackrel{d}{\longrightarrow} A\end{array}$ (2)

In other words, we assume that during a time interval $dt$ two events may happen:

• Birth: With probability $r \, dt$, each individual may give birth to offspring, thus increasing the population size by 1.
• Death due to competition: With probability $d\, dt$, out of each pair of individuals one may die due to the competition for common resources, thus decreasing the population size by 1. Note that $d$ is related to the carrying capacity $K$ in eq. (1) by $d = r/K$.

Note that the stochasticity in this model is not due to random external influences but due to the discreteness of the population (demographic noise).
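As a quick consistency check, the reactions in eq. (2) can be simulated directly with the standard Gillespie algorithm. Here is a minimal Python sketch (function name and parameter values are mine, purely illustrative):

```python
import random

def gillespie_logistic(n0, r, d, t_max, rng):
    """Simulate the birth-death reactions in eq. (2):
    birth at rate r*n, death at rate d*n*(n-1)."""
    t, n = 0.0, n0
    while True:
        birth = r * n
        death = d * n * (n - 1)
        total = birth + death
        # waiting time to the next reaction is exponential with rate `total`
        t += rng.expovariate(total)
        if t > t_max:
            break
        if rng.random() < birth / total:
            n += 1  # birth
        else:
            n -= 1  # death due to competition
    return n

# the population fluctuates around the carrying capacity K = r/d = 50
r, K = 1.0, 50.0
samples = [gillespie_logistic(10, r, r / K, 30.0, random.Random(i)) for i in range(100)]
print(sum(samples) / len(samples))  # close to K, up to fluctuations of order sqrt(K)
```

Averaging many such late-time samples reproduces the mean and variance computed below.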

Solution of the stochastic model

The system of reaction equations (2) translates into the following master equation for the probabilities $p_n(t)$ of the population size at time $t$ being equal to $n$:

$\begin{array}{rl} \displaystyle \partial_t p_n(t) = & \displaystyle n(n+1)d\,p_{n+1}(t) - n(n-1)d\, p_n(t) \\ & \displaystyle + (n-1)r\,p_{n-1}(t) - nr\, p_n(t)\end{array}$ (3)

This looks daunting. However, it can be simplified a lot by introducing the generating function

$\displaystyle G(\lambda,t) := \sum_{n=0}^{\infty} \lambda^n p_n(t)$

After some algebra we obtain

$\displaystyle \partial_t G(\lambda,t) = \lambda(1-\lambda) \left[d\, \partial_\lambda^2 G(\lambda,t) - r \,\partial_\lambda G(\lambda,t)\right]$.

This looks simpler than eq. (3), but finding the full time-dependent solution $G(\lambda,t)$ still does not seem feasible. Let’s focus on the steady state, where $\partial_t G(\lambda,t)=0$. Together with the boundary conditions $G(\lambda=1,t)=1$ (normalization of probability) and $G(\lambda=0,t)=p_0(t)=0$ (the population never goes extinct, since the death reaction requires at least two individuals), we obtain the steady-state solution $G_s(\lambda)$

$\displaystyle G_s(\lambda) = \frac{e^{K\lambda}-1}{e^{K}-1}$. (4)

Here we set $d=r/K$ to connect to the notation of eq. (1). Correspondingly, the steady-state probabilities for population size $n$ are

$\displaystyle p_n = \frac{K^n}{n!}\frac{1}{e^K-1}$. (5)

Of course, these can equivalently also be obtained by solving the recursion relation (3) with $\partial_t p_n(t) = 0$. This result can be easily checked against simulations, see figure below.

Stationary probability distribution for the stochastic logistic growth model in eq. (2), for $r=1, K=5, d=r/K=0.2$. Blue line: exact solution in eq. (5). Yellow bars: Histogram over $10^6$ samples.

Let’s try to understand what the steady state of our stochastic model looks like. From eq. (4) we easily obtain the first moments of the distribution of the population size. The mean population size is:

$\displaystyle \left\langle n \right\rangle = \partial_\lambda\big|_{\lambda=1}G_s(\lambda) = \frac{K}{1-e^{-K}}$

So for large carrying capacities $K \gg 1$, we recover the same result as in the deterministic model (which is reasonable since for large population sizes demographic noise is less pronounced!). However, for smaller $K$ fluctuations increase the average population size. E.g. for $K=2$, the average population size in our stochastic model is $2/\left(1-e^{-2}\right) \approx 2.31$.

Now let us look at the variance,

$\begin{array}{rl} \sigma^2 := &\displaystyle \left\langle n^2\right\rangle - \left\langle n \right\rangle^2\\=&\displaystyle\partial_\lambda\big|_{\lambda=1}\lambda \partial_\lambda G_s(\lambda)-\left(\frac{K}{1-e^{-K}}\right)^2\\=& \displaystyle\frac{K\left(1-e^{-K}-K\,e^{-K}\right)}{\left(1-e^{-K}\right)^2}\end{array}$.

For large $K \gg 1$, we have $\sigma^2 \approx K$. So we see that stochastic fluctuations spread out the steady state to a width $\sigma \approx \sqrt{K}$ around the carrying capacity.
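These moment formulas are easy to cross-check numerically by summing the distribution in eq. (5) directly; a small Python snippet (the cutoff n_max is mine, and harmless since the terms decay factorially):

```python
import math

def stationary_moments(K, n_max=200):
    """Mean and variance of p_n = K^n / (n! (e^K - 1)), n >= 1."""
    norm = math.expm1(K)  # e^K - 1
    term = 1.0  # running value of K^n / n!
    m1 = m2 = 0.0
    for n in range(1, n_max):
        term *= K / n
        p = term / norm
        m1 += n * p
        m2 += n * n * p
    return m1, m2 - m1 * m1

K = 5.0
mean, var = stationary_moments(K)
exact_mean = K / (1 - math.exp(-K))
exact_var = K * (1 - math.exp(-K) - K * math.exp(-K)) / (1 - math.exp(-K)) ** 2
print(mean, exact_mean)  # both ≈ 5.0339
print(var, exact_var)
```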

Written by inordinatum

January 1, 2016 at 4:11 pm

Estimating expected growth

Let’s take some fluctuating time series — say the value of some financial asset, like a stock price. What is its average growth rate? This seemingly trivial question came up in a recent discussion I had with a friend; obviously, it is relevant for quantitative finance and many other applications. Looking at it in more detail, it turns out that a precise answer is actually not that simple, and depends on the time range for which one would like to estimate the expected growth. So I’d like to share here some insights on this problem and its interesting connections to stochastic processes. Surprisingly, it was only studied in the literature quite recently!

The setup

Let’s consider the price $s_t$ of an asset at $N+1$ discrete times $t=1...N+1$. The corresponding growth rates $r_t$ are defined by

$\displaystyle 1+r_t := \frac{s_{t+1}}{s_t}.$    (1)

I.e. if $s_{t+1}=1.5 s_t$, the growth rate is $r_t=0.5=50\%$. Let’s also assume that our growth process $r_t$ is stationary, i.e. that the distribution of the growth rates does not change in time. Say for concreteness that the growth rates $r_t$ are i.i.d. (independent identically distributed) random variables.

The arithmetic average

The most immediate idea for computing the average growth rate $\overline{r}$ is just to take the arithmetic average, i.e.

$\displaystyle \overline{r}^{AM} := \frac{1}{N}\sum_{t=1}^N r_t.$    (2)

What does this give us? Obviously, by the assumption of stationarity made above, with increasing sample size $N$ the expectation value for the growth rate at the next time step $r_{N+1}$ (or any other future time step) is approximated better and better by $\overline{r}^{AM}$:

$\displaystyle \lim_{N\to\infty} E[r_{N+1}]-\frac{1}{N}\sum_{t=1}^N r_t =0.$

This seems to be what we’d expect for an average growth rate, so what’s missing? Let’s go back to our sample of $N$ asset prices and take the total growth

$\displaystyle \frac{s_{N+1}}{s_1} = \prod_{i=1}^N (1+r_i).$

By the relationship between the arithmetic and the geometric mean, we know that

$\displaystyle \frac{s_{N+1}}{s_1} = \prod_{i=1}^N (1+r_i) \leq \left[\frac{1}{N}\sum_{i=1}^N (1+r_i)\right]^N = \left(1+\overline{r}^{AM}\right)^N.$ (3)

The left-hand side is the total growth after $N$ time periods and the right-hand side is its estimate from the arithmetic mean in eq. (2). Equality only holds in eq. (3) when all the $r_t$ are equal. When the $r_t$ fluctuate the total growth rate will always be strictly less than the estimate from the arithmetic mean, even as $N \to \infty$. So, by taking the arithmetic mean to obtain the total growth in the price of our asset at the end of the observation period, we systematically overestimate it. The cause for this is the compounding performed to obtain the total growth over more than one time step. This makes our observable a nonlinear function of the growth rate in a single time step. Thus, fluctuations in the growth rate don’t average out, and yield a net shift in its expectation value.
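A tiny numerical illustration of inequality (3), with made-up growth rates:

```python
rates = [0.30, -0.10, 0.20, -0.05, 0.15]  # hypothetical per-period growth rates
N = len(rates)

total_growth = 1.0
for r in rates:
    total_growth *= 1 + r  # realized total growth s_{N+1}/s_1

am = sum(rates) / N  # arithmetic mean, eq. (2)
am_estimate = (1 + am) ** N

print(total_growth, am_estimate)  # ≈ 1.5339 vs ≈ 1.6105: the AM estimate is too high
```

Even though the arithmetic mean here is a modest 10% per period, compounding it overestimates the realized five-period growth by about five percentage points.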

The first explicit observation of this effect I’ve found is in a 1974 paper by M. E. Blume, Unbiased Estimators of Long-Run Expected Rates of Return. Quantitative estimates from that paper and from a later study by Jacquier, Kane and Marcus show that in typical situations this overestimation may easily be as large as 25%-100%.

The geometric average

So we see that the arithmetic mean does not provide an unbiased estimate of the compounded growth over a longer time period. Another natural way to obtain the average growth, which intuitively seems more adapted to capture that effect, is to take the geometric average of the growth rates:

$\displaystyle 1+\overline{r}^{GM} := \left[\prod_{t=1}^N (1+r_t)\right]^{1/N}.$    (4)

Now, by construction, the total growth at the end of our observation period, i.e. after $N$ time periods, is correctly captured:

$\displaystyle \frac{s_{N+1}}{s_1} = \left(1+\overline{r}^{GM}\right)^N$

But this only solves the problem which we observed after eq. (3) for this specific case, when estimating growth for a time period whose length is exactly the same as the observation time span $N$ from which we obtain the average. For shorter time periods (in particular, when estimating the growth rate for a single time step $r_t$), the geometric mean will now underestimate the growth rate. On the other hand, for time periods even longer than the observation period, it will still overestimate it like the arithmetic mean (see again the paper by Blume for a more detailed discussion).

Unbiased estimators

Considering the results above, the issue becomes clearer: A reasonable (i.e. unbiased) estimate for the compounded growth over a time period requires a formula that takes into account both the number of observed time steps $N$, and the number of time steps over which we’d like to estimate the compounded growth $T$. For $T=1$ we can use the arithmetic mean, for $T=N$ we can use the geometric mean, and for general $T$ we need another estimator altogether. For the general case, Blume proposes the following (approximately) unbiased estimator $\left(\overline{r}^{UB}\right)^T$:

$\displaystyle \left(\overline{r}^{UB}\right)^T = \frac{N-T}{N-1}\left(1+\overline{r}^{AM}\right)^T+\frac{T-1}{N-1}\left(1+\overline{r}^{GM}\right)^T.$    (5)

Eq. (5) is a reasonable approximation for the compounded growth in the absence of any further information on the form of the distribution, correlations, etc.
For $T=1$, i.e. when estimating growth over a single time step, eq. (5) reduces to the arithmetic-mean growth factor $1+\overline{r}^{AM}$, which is unbiased as we saw above. For $T=N$, it reduces to the geometric-mean result $\left(1+\overline{r}^{GM}\right)^N$, which is also correct. For other values of $T$, eq. (5) interpolates linearly between the arithmetic and the geometric mean.
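A quick Monte Carlo experiment shows the biases of the three estimators for an intermediate horizon $T$. The setup is my own illustrative choice: i.i.d. rates uniform around a true mean $\mu$, so the true expected compounded growth is $(1+\mu)^T$:

```python
import random

rng = random.Random(1)
N, T, trials = 12, 6, 50_000
mu = 0.1
true = (1 + mu) ** T  # expected compounded growth over T steps for i.i.d. rates
est_am = est_gm = est_ub = 0.0
for _ in range(trials):
    rates = [mu + rng.uniform(-0.2, 0.2) for _ in range(N)]
    am = sum(rates) / N  # arithmetic mean, eq. (2)
    gm = 1.0
    for r in rates:
        gm *= 1 + r
    gm **= 1.0 / N  # geometric-mean growth factor 1 + r^GM, eq. (4)
    est_am += (1 + am) ** T
    est_gm += gm ** T
    est_ub += ((N - T) * (1 + am) ** T + (T - 1) * gm ** T) / (N - 1)  # eq. (5)
est_am /= trials
est_gm /= trials
est_ub /= trials
print(true, est_am, est_ub, est_gm)  # AM biased high, GM biased low, Blume in between
```

The sample averages of the AM and GM estimators land visibly above and below the true value, while Blume's combination sits much closer to it.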

To see how the coefficients in eq. (5) arise, let us start with an ansatz of the form

$\displaystyle \left(\overline{r}^{UB}\right)^T = \alpha\left(1+\overline{r}^{AM}\right)^T+\beta\left(1+\overline{r}^{GM}\right)^T.$    (6)

Let us further split up the growth rates as $r_t = \mu + \epsilon_t$, where $\mu$ is the “true” average growth rate and the $\epsilon_t$ are zero-mean fluctuations. Inserting this, as well as the definitions of $\overline{r}^{AM}$ and $\overline{r}^{GM}$, into eq. (6) we get

$\displaystyle \left(\overline{r}^{UB}\right)^T = \alpha\left(1+\mu +\frac{1}{N}\sum_{i=1}^N\epsilon_i\right)^T+\beta\left[\prod_{i=1}^N (1+\mu+\epsilon_i)\right]^{T/N}.$

Now let us assume that the fluctuations $\epsilon$ are small, and satisfy $E[\epsilon_i\epsilon_j]=\sigma^2 \delta_{ij}$. Expanding to second order in $\epsilon$ (the first order vanishes), and taking the expectation value, we obtain

$\begin{array}{rl} \displaystyle E\left[\left(\overline{r}^{UB}\right)^T\right] = & \displaystyle \alpha\left(1+\mu\right)^T\left[1+\frac{T(T-1)\sigma^2}{2(1+\mu)^2N}\right] \\ & \displaystyle +\beta\left(1+\mu\right)^T\left[1+\frac{T(T-N)\sigma^2}{2(1+\mu)^2N}\right] \\ & \displaystyle + \mathcal{O}(\sigma^4). \end{array}$

To obtain an estimator that is unbiased (to second order in $\sigma$), we now choose $\alpha$ and $\beta$ such that the term of order $\sigma^0$ is just the true growth rate $(1+\mu)^T$, and the term of order $\sigma^2$ vanishes. This gives the system

$\displaystyle \begin{array}{rl} \displaystyle \alpha + \beta & = 1,\\ \displaystyle \frac{T-1}{2}\alpha+\frac{T-N}{2}\beta & = 0. \end{array}$

The solution of this linear system for $\alpha$ and $\beta$ yields exactly the coefficients in eq. (5).
Of course, here we make the assumption of small fluctuations and also a specific ansatz for the estimator. If one has more information on the distribution of the growth rates $r$ this may not be the most adequate one, but with what we know there’s not much more we can do!

Outlook

As you can see from the above discussion, estimating the expected (compounded) growth over a series of time steps is more complex than it appears at first sight. I’ve shown some basic results, but didn’t touch on many other important aspects:

• In addition to the growth rate $r$, it is also interesting to consider the discount factor $r^{-T}$. Blume’s approach is extended to this observable in this paper by Ian Cooper.
• If one assumes the growth factors $1+r$ in eq. (1) to be log-normally distributed, the problem can be treated analytically. Jacquier, Kane and Marcus discuss this case in detail in this paper, and also provide an exact result for the unbiased estimator.
• The assumption of independent, identically distributed growth rates is not very realistic. On the one hand, we expect the distribution from which the annual returns are drawn to vary in time (i.e. due to underlying macroeconomic conditions). This is discussed briefly in Cooper’s paper. On the other hand, we also expect some amount of correlation between subsequent time steps, even if the underlying distribution does not change. It would be interesting to see how this modifies the results above.

Let me know if you find these interesting — I’ll be glad to expand on that in a future post!

Written by inordinatum

January 18, 2015 at 4:18 pm

Extreme avalanches in the ABBM model

Extreme value statistics is becoming a popular and well-studied field. Just like the sums of random variables exhibit universality described by the central limit theorem, maxima of random variables also obey a universal distribution, the generalized extreme-value distribution. This is interesting for studying e.g. extreme events in finance (see this paper by McNeil and Frey) and climate statistics (see e.g. this physics paper in Science and this climatology paper).

Having refereed a related paper recently, I’d like to share some insights on the statistics of extreme avalanches in disordered systems. An example are particularly large jumps of the fracture front when slowly breaking a disordered solid. A simple but reasonable model for such avalanches is the Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model, originally invented for describing Barkhausen noise. I already touched upon it in a previous blog post, and will focus on it again in the following.
I’ll show an exact formula for the distribution of the maximum avalanche size in an interval, and connect this result to the universal extreme value distribution when considering a large number of avalanches.

Brief review: The ABBM model

To briefly recap (see my previous post for details), the ABBM model consists of a particle at position $u(t)$, pulled through a random force landscape by a spring. A key assumption of the model is that the force resulting from the disordered potential is a Brownian motion in $u$. This allows computing many observables exactly.
For example, when the force due to the spring increases by $f$, it is well-known (see e.g. arXiv:0808.3217 and the citations therein) that the resulting displacement $S$ of the particle follows the distribution

$\displaystyle P_f(S) = \frac{f}{2\sqrt{\pi} S^{3/2}}\exp\left[-\frac{(kS-f)^2}{4S}\right].$    (1)

Here, $k$ is the spring constant. For simplicity I took a Brownian random force landscape with variance $\sigma=1$ here, but the results are straightforward to generalize. This result is basically the distribution of the first-passage time of a Brownian motion with drift $k$ at a given level $f$. In this context it is also known as the Bachelier-Levy formula (see also my post on first-passage times).
For small forces, $f \to 0$, and weak springs, $k \to 0$, (1) becomes a power-law distribution $P(S) \sim S^{-3/2}$.
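Eq. (1) is, up to parametrization, an inverse Gaussian distribution, and two of its properties are easy to verify numerically: it is normalized to one, and its mean is $\left\langle S \right\rangle = f/k$ (on average the particle follows the spring). A quick midpoint-rule integration in Python (grid and cutoff are mine; the tail beyond the cutoff is exponentially small):

```python
import math

def P(S, f, k):
    """Avalanche-size density of eq. (1) (Brownian force landscape, sigma = 1)."""
    return f / (2 * math.sqrt(math.pi) * S ** 1.5) * math.exp(-(k * S - f) ** 2 / (4 * S))

f, k = 1.0, 1.0
dS = 1e-3
grid = [dS * (i + 0.5) for i in range(int(100 / dS))]  # midpoint rule on (0, 100]
vals = [P(S, f, k) for S in grid]
norm = sum(vals) * dS
mean = sum(S * v for S, v in zip(grid, vals)) * dS
print(norm, mean)  # ≈ 1 and ≈ f/k = 1
```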

The largest avalanche in the ABBM model

Now let us consider particularly large avalanches. When applying a force $f$, the probability to have a total displacement $S \leq S_0$ is

$\displaystyle F^{tot}_f(S_0) := \int_0^{S_0}\mathrm{d}S\,P_f(S).$    (2)

Note, however, that the total displacement in this case is the sum of many avalanches, triggered by infinitesimal steps in $f$. So how do we obtain the probability $F_f(S_0)$ that all avalanches triggered during this ramp-up of the force are smaller than $S_0$? We decompose the ramp-up into $n$ intervals of size $f/n$, and let $n \to \infty$:

$\displaystyle F_f(S_0) = \lim_{n\to\infty} \left[F^{tot}_{f/n}(S_0)\right]^n.$    (3)

Note that we use here the Markovian property of the Brownian Motion, which ensures that avalanches on disjoint intervals are independent. Only this property permits us to take the $n$-th power to obtain the cumulative distribution for the $n$ intervals; on any non-Markovian landscape the avalanches in these intervals would be correlated and things would be much more complicated.

Combining equations (1), (2) and (3) we can compute $F_f(S_0)$ explicitly:

$\begin{array}{rl} F_f(S_0) =& \lim_{n\to\infty} \left[F^{tot}_{f/n}(S_0)\right]^n = \exp\left[-f\partial_f\big|_{f=0}\int_{S_0}^{\infty}\mathrm{d}S\,P_f(S)\right] \\ =& \displaystyle \exp\left[-f\left(\frac{e^{-k^2 S_0/4}}{\sqrt{\pi S_0}} - \frac{k}{2}\text{erfc} \frac{k\sqrt{S_0}}{2}\right)\right]. \end{array}$    (4)

This satisfies the normalization expected of a cumulative distribution function: For $S_0 \to 0$, $F_f(S_0)\to 0$ and for $S_0 \to \infty$, $F_f(S_0)\to 1$.
The probability distribution of the maximal avalanche size $S_{max}$ is correspondingly

$\displaystyle P_f(S_{max}) = \partial_{S_0}\big|_{S_0=S_{max}}F_f(S_0).$    (5)

Eqs. (4) and (5) are a nice closed-form expression for the size distribution of the largest avalanche during the force increase by $f$!
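Since erfc is available in any math library, eq. (4) is essentially a one-liner. A small Python check of its limiting behaviour (parameter values arbitrary):

```python
import math

def F(S0, f, k):
    """Probability that all avalanches during the ramp-up by f are <= S0, eq. (4)."""
    a = math.exp(-k * k * S0 / 4) / math.sqrt(math.pi * S0) \
        - (k / 2) * math.erfc(k * math.sqrt(S0) / 2)
    return math.exp(-f * a)

f, k = 2.0, 1.0
values = [F(S0, f, k) for S0 in (1e-6, 0.1, 1.0, 10.0, 1e6)]
print(values)  # increases monotonically from ~0 to ~1, as a CDF should
```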

From few to many avalanches

As one goes to infinitesimal force steps $f \to 0$, only a single avalanche is triggered. Then it is clear from our construction and eqs. (4), (5) that $P_f(S_{max}) \to P_f(S)$ as defined in (1). So, as expected, when considering a single avalanche the maximal avalanche size and the total avalanche size coincide.

On the other hand, for large force steps $f \to \infty$, the ramp-up of the force triggers many independent avalanches. The total displacement $S$ is then the sum of many independent avalanche sizes $S_1...S_n$. Thus, by the central limit theorem, one expects to find a Gaussian distribution for $S$. We can see this explicitly from (1): The expectation value is $\left\langle S \right\rangle = f/k$, and the fluctuations around it are $\delta S := S-\left\langle S \right\rangle$. From (1) one finds that they scale like $\delta S \sim \sqrt{f/k^3}$. The normalized fluctuations $d := \delta S \sqrt{k^3/f}$ have the distribution

$\displaystyle P(d) = \frac{1}{2\sqrt{\pi}}\exp\left(-\frac{d^2}{4}\right) + \mathcal{O}(f^{-1/2}).$     (6)

For large $f$, we are indeed left with a Gaussian distribution for the normalized fluctuations $d$. This is easily checked numerically, see figure 1 below.

Figure 1: Distribution of the total displacement $S$ for increasing force steps $f$, as given in eq. (1). Observe how the distribution converges to the Gaussian limit eq. (6) indicated by black dotted lines.

So what happens with the maximal avalanche size $S_{max}$ for large steps $f \to \infty$? $S_{max}$ is now the maximum of many independent avalanche sizes $S_1...S_n$, and as mentioned in the introduction we expect its distribution to be a universal extreme value distribution.
Since only large $S_0$ are relevant in (4), the exponent can be approximated by

$\displaystyle \frac{e^{-k^2 S_0/4}}{\sqrt{\pi S_0}} - \frac{k}{2}\text{erfc} \frac{k\sqrt{S_0}}{2} \approx \frac{2}{k^2 \sqrt{\pi} S_0^{3/2}}e^{-k^2 S_0/4}.$    (7)

Inserting this back into (4), we see that for large $f$ the distribution $P_f(S_{max})$ is centered around $S_f := \frac{4}{k^2} \log f$. The cumulative distribution function (4) is well approximated by

$\displaystyle F_f(S_{max}) \approx \exp\left[- \frac{2}{k^2 \sqrt{\pi} S_f^{3/2}}e^{-k^2 (S_{max}-S_f)/4} \right].$    (8)

This is, up to rescaling, the cumulative distribution function of the Gumbel extreme-value distribution $e^{-e^{-x}}$. It is easy to check this universal asymptotic form numerically, see figure 2 below. Note that the convergence here is much slower than for the total displacement shown in figure 1, since the typical scale for $S_{max}$ only grows logarithmically with $f$.

Figure 2: Distribution of the size of the largest avalanche $S_{max}$ for force steps of increasing size $f$, as given by eq. (5). Observe how the distribution converges to the universal Gumbel limit in eq. (8) indicated by black dotted lines.

For some applications, the limit of a very soft spring, $k \to 0$, is important. I leave the details to the reader, but the main picture is that the exponential decay for large $S_0$ in eq. (7) is replaced by a power law $S_0^{-1/2}$. Correspondingly, the universal extreme-value distribution observed for large force steps $f$ is no longer the Gumbel distribution (8) but instead a Fréchet distribution.

Side note: The minimal avalanche size

One may be tempted to approach similarly the problem of the minimal avalanche size for a slow ramp-up of the applied force. However, this is not well-defined: Due to the roughness of the Brownian force landscape, as we increase the force more and more slowly, the size of the smallest avalanche decreases more and more. Hence, its distribution will always be discretization-dependent and will not yield a finite result such as eq. (4) in the continuum limit.

All this gives a consistent picture of the maximal avalanche in the ABBM model. I find it really nice that it is so simple, knowing the avalanche size distribution (1), to express the distribution of the size of the largest avalanche in closed form and understand how it behaves!

Written by inordinatum

August 19, 2014 at 9:53 pm

Simulating Directed Polymers and the Tracy-Widom Distribution

A few posts ago I briefly talked about the problem of a directed polymer in a random medium. In that post, I discussed the scaling behaviour of the polymer fluctuations due to the disordered environment, i.e. its roughness exponent, in $2$ dimensions.

As with many other statistical physics problems, for gaining an intuitive understanding it is also useful to perform some numerical simulations. In this post I will show how the directed polymer partition sum can be simulated using Wolfram Mathematica. I will also compare the results to the known exact result for the distribution of the free energy, the Tracy-Widom distribution.

The transfer-matrix algorithm

The partition sum $Z$ of the directed polymer is defined as the sum over all directed paths $\Gamma$ of the Boltzmann weights $e^{-\beta E_\Gamma}$ of each path. The energy $E_\Gamma$ of the path $\Gamma$ is the sum over the random energies $E_i$ of the sites $i$ which are touched by $\Gamma$. $\beta = \frac{1}{T}$ is the inverse temperature.

In any space dimension, this partition sum can be evaluated rapidly using a transfer-matrix algorithm. Let us order the lattice sites along the longitudinal direction $t$ of the directed polymer. Then the partition sum at site $(x, t)$ is obtained from the partition sums at this site’s neighbours $y\wedge x$ in the preceding slice $t-1$:

$\displaystyle Z(x,t) = e^{-\beta E_{x,t}}\sum_{y \wedge x} Z(y,t-1)$.

For the free energy $F(x,t) := -\frac{1}{\beta}\ln Z(x,t)$, we obtain

$\displaystyle F(x,t) = E_{x,t} - \frac{1}{\beta} \log \sum_{y \wedge x} e^{-\beta F(y,t-1)}$.

Let us focus on the case of very low temperatures, $\beta \to \infty$. Then the logarithm is dominated by the neighbour $y$ with the minimal (i.e. most negative) free energy $F(y,t-1)$. Hence we obtain

$\displaystyle F(x,t) = E_{x,t} + \min_{y \wedge x} F(y,t-1)$.

This also makes sense intuitively: At zero temperature, the partition sum of the directed polymer is dominated by the minimal-energy configuration. The “cost” of getting to site $x$ is the cheapest “cost” of getting to one of its neighbours, augmented by the (fixed) cost $E(x,t)$ of going to site $x$.
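Before turning to the Mathematica implementation, here is the same zero-temperature recursion as a plain Python reference (names, periodic boundaries and the alternating-neighbour scheme are my choices, mirroring the Mathematica code that follows):

```python
import random

def free_energy(width, length, rng=None):
    """Zero-temperature transfer matrix: F(x,t) = E(x,t) + min of neighbours at t-1.
    Alternates the neighbour pairs (x, x+1) and (x-1, x), with periodic boundaries,
    so the polymer lives on a square lattice with 2*length time slices."""
    rng = rng or random.Random(0)
    F = [0.0] * width  # free initial condition: F(x, t=0) = 0
    for _ in range(length):
        F = [min(F[x], F[(x + 1) % width]) + rng.gauss(0, 1) for x in range(width)]
        F = [min(F[x], F[(x - 1) % width]) + rng.gauss(0, 1) for x in range(width)]
    return F

print(free_energy(8, 4))  # free energies F(x, t=8) for x = 0..7
```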

The Mathematica simulation code

The following short Mathematica routine returns the free energy at the final time $t$ for a directed polymer


fe =
 Compile[{{wd, _Integer}, {len, _Integer}},
  Module[{pot1, pot2, fval},
   pot1 = RandomVariate[NormalDistribution[], {len, wd}];
   pot2 = RandomVariate[NormalDistribution[], {len, wd}];
   fval = ConstantArray[0., wd];
   Do[
    fval = (fval - (fval - RotateLeft[fval]) UnitStep[fval - RotateLeft[fval]]) + pot1[[k]];
    fval = (fval - (fval - RotateRight[fval]) UnitStep[fval - RotateRight[fval]]) + pot2[[k]];,
    {k, 1, len}];
   Return[fval]]];



k is the longitudinal coordinate of the polymer, and runs from 1...len. The vector fval has wd components and contains the free energy values at the current value of k, for all transversal coordinates x=1...wd.
For each k I perform two steps forward, one where site i neighbours on i+1 and i, and one where site i neighbours on i and i-1. This ensures that in the end one obtains a standard square lattice of size 2len$\times$wd, where the directed polymer can choose between two symmetric alternatives at each time step. pot1 and pot2 are len$\times$wd matrices containing the random energies for each site, for the even and odd values of k, respectively.

Since Mathematica’s built-in Min function does not operate componentwise on vectors, I emulate it by using the componentwise UnitStep function:

a-(a-b)UnitStep[a-b] = Min[a,b].
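This identity is easy to convince oneself of; in Python notation, with a step function following Mathematica's convention UnitStep[0] = 1:

```python
def unitstep(x):
    """Mathematica's UnitStep: 1 for x >= 0, else 0."""
    return 1.0 if x >= 0 else 0.0

pairs = [(3.0, 5.0), (5.0, 3.0), (2.0, 2.0), (-1.5, 0.5)]
for a, b in pairs:
    assert a - (a - b) * unitstep(a - b) == min(a, b)
print("identity holds on all test pairs")
```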

At $t=0$, I use a free initial condition: $F(x,t=0)=0 \Leftrightarrow Z(x,t=0)=1$, i.e. the polymer end at $t=0$ can be anywhere. Modifying the line

fval = ConstantArray[0., wd];,

one can change this e.g. to a fixed initial condition where the polymer end at $t=0$ is fixed.
The returned value is the vector $F(x,t)$ where x=1...wd and t=2len is the final time.

For example, to generate $10^4$ samples of the free energy for a directed polymer on a $256\times 256$ lattice, with the final end fixed, and the initial end free, you would call

 Table[fe[256,128][[1]], {10000}] 

Note that the [[1]] picks out the free energy for a fixed final site. This simulation takes about a minute on my laptop.

The Tracy-Widom distribution of the directed polymer free energy

Theory predicts the following form for the free energy $F$ of the directed polymer model in $2$ dimensions:

$\displaystyle F(t) = ct - A t^{1/3}\chi + ...$,

where $...$ indicates terms subdominant in $t$. $c$ is a constant and $c t$ is the nonuniversal, deterministic part of the free energy. The fluctuations of the free energy due to randomness of the disorder, described by the second term, are universal: The random variable $\chi$ is distributed according to a Tracy-Widom distribution, and only its amplitude $A$ depends on discretisation details.

The precise distribution depends on the symmetries imposed by the initial condition.

• For a polymer free at $t=0$, $P(\chi \leq s) = F_2(s)$, where $F_2(s)$ is the Tracy-Widom distribution for the Gaussian Unitary Ensemble (GUE).
• For a polymer fixed at $t=0$, $P(\chi \leq s) = F_1(s)$, where $F_1(s)$ is the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE).

The functions $F_1$ and $F_2$ are solutions of Painlevé differential equations. Tables of their numerical values can be found with very high precision e.g. on the web page of M. Prähofer.

Tracy-Widom distribution and directed polymer free energy.

Blue line: Tracy-Widom distribution $F_1$ of the GOE ensemble.
Red line: Tracy-Widom distribution $F_2$ of the GUE ensemble.
Yellow line: Gaussian distribution.
Histogram: $10^5$ simulations of a $256\times 256$ directed polymer with free (blue) and fixed (red) initial condition, respectively.

All distributions are normalized so that mean=0 and variance=1.

In the figure on the right you can see how this theoretical prediction compares to numerical simulations using the code above. To get rid of the nonuniversal constants $c$ and $A$, I consider the “normalized” free energy

$f_n := \left(F-\left\langle F \right\rangle\right)/\sqrt{\left\langle F^2 \right\rangle -\left\langle F \right\rangle^2}$.

We see that its distribution observed in directed polymer simulations has a skewed shape, which agrees very well with the Tracy-Widom prediction (clearly much better than with the symmetric Gaussian distribution). In the tails, one can even “almost” distinguish the two universality classes corresponding to different initial conditions. For the difference to be really significant, one would need to run some longer simulations though.

Further optimizations

Naturally one would like to make the simulation even faster, in order to have better statistics and reach larger system sizes. One may think that implementing the code above in a low-level programming language (e.g. C++) would yield a large performance gain. However, it turns out that a naive C++ implementation (where the operations on the free-energy vector are implemented by a simple loop over the transversal coordinate $x$) is actually about three times slower than the Mathematica code above. This is because such array-wise operations can be optimized using modern processor SIMD extensions, which are utilized in Mathematica’s routines.

Of course, one can also take advantage of this in C++. One can either hand-optimize the loop over transversal coordinates using the processor-dependent SIMD routines, or use an array math library. For example, I implemented the transfer-matrix algorithm above in C++ using the vector routines from Intel’s Math Kernel Library, and was able to achieve a speed-up by a factor of five compared to the Mathematica code. Still, considering the ease of programming and debugging, I find that the simple Mathematica implementation fares amazingly well!

If you’re interested in more details, references, higher-dimensional versions, the C++ code, or anything related, go ahead and drop me a comment below — I’ll extend this post if there is interest 😉

Written by inordinatum

January 22, 2014 at 11:27 pm

Posted in Directed Polymers

Fokker-Planck equation for a jump diffusion process

One of the simplest stochastic processes is a Brownian motion with drift, or a diffusion process:

$\displaystyle \begin{array}{rl} \dot{x}(t) =& \mu + \sigma\,\xi(t), \\ x(t) =& \mu t + \sigma \,W(t). \end{array}$   (1)

Here, $\xi(t)$ is Gaussian white noise with zero mean and correlations $\left\langle \xi(t_1)\xi(t_2)\right\rangle = \delta(t_1-t_2)$. Its integral $W(t) = \int_0^t \mathrm{d} t'\,\xi(t')$ is a Brownian motion.
Continuous Itô stochastic processes such as eq. (1) are insufficient for applications where the random variable $x$ may jump suddenly (such as in avalanches). A natural extension of (1) for modelling this is a so-called jump-diffusion process. Let us suppose that our jump sizes $s$ are positive, independent and identically distributed with density $Q(s)$. Then, the jump diffusion process $y(t)$ is

$\displaystyle y(t) = x(t) + \sum_{i=1}^{N(t)} s_i = \mu t + \sigma\, W(t) + \sum_{i=1}^{N(t)} s_i,$   (2)

where $s_i, i\in \mathbb{N}$ are i.i.d. jump sizes as above, and $N(t)$ is the number of jumps encountered up to time $t$. For simplicity, let us assume that jumps occur independently with rate $\lambda$, i.e. that the probability to have a jump in a time interval $\mathrm{d}t$ is $\lambda\, \mathrm{d}t$. Then, $N(t)$ is a Poisson process with rate $\lambda$.
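As a quick sanity check, the process (2) is easy to simulate directly. Here is a minimal Python sketch (all parameter values are illustrative; I take exponential jumps, which is also the example treated in detail below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; jump sizes drawn from Q(s) = alpha*exp(-alpha*s)
mu, sigma, lam, alpha = 0.5, 1.0, 2.0, 3.0
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps

y = np.zeros(n_paths)
for _ in range(n_steps):
    # Diffusion part: Euler step for dx = mu*dt + sigma*dW
    y += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    # Jump part: a jump occurs with probability lambda*dt in each time step
    jumps = rng.random(n_paths) < lam * dt
    y += jumps * rng.exponential(1.0 / alpha, n_paths)

# Empirical mean and variance vs. the exact moments derived further below:
# <y(T)> = (lambda/alpha + mu) T,   <y(T)^2>_c = (2 lambda/alpha^2 + sigma^2) T
print(y.mean(), (lam / alpha + mu) * T)
print(y.var(), (2 * lam / alpha**2 + sigma**2) * T)
```

With 20000 paths, both observables agree with the exact moments to within a few percent.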

It is well-known that the diffusion process $x(t)$ in (1) is equivalently described by a partial differential equation for the distribution $P_t(x)$ of $x$, the Fokker-Planck equation (FPE)

$\displaystyle \partial_t P_t(x) = \frac{\sigma^2}{2}\partial_x^2 P_t(x) - \mu \partial_x P_t(x)$.   (3)

This representation is useful e.g. for first-passage problems, which correspond to various boundary conditions imposed on the PDE (3). So how does one generalise the Fokker-Planck equation (3) to the case of the jump-diffusion process $y(t)$ in (2)? I will explain in the following that the answer is

$\displaystyle \partial_t P_t(y) = \frac{\sigma^2}{2}\partial_y^2 P_t(y) - \mu \partial_y P_t(y)- \lambda P_t(y) + \lambda\,\int_0^\infty \mathrm{d}s \,Q(s)P_t(y - s)$,   (4)

and then discuss a specific example.

1. Deriving the jump-diffusion FPE

Let us consider a time step from $t$ to $t + \mathrm{d}t$. The probability for a jump to occur during this interval is $\lambda\,\mathrm{d}t$, so

$\displaystyle P_{t+\mathrm{d}t}(y) = (1-\lambda \mathrm{d}t)\left\langle P_t\left(y-\mu \,\mathrm{d}t - \left[W(t+\mathrm{d}t)-W(t)\right] \right) \right\rangle_W + \lambda \mathrm{d}t\,\left\langle P_t(y-s) \right\rangle_s$,   (5)

where $\left\langle \cdot \right\rangle_W$ denotes averaging over all realizations of the Brownian motion $W$, and $\left\langle \cdot \right\rangle_s$ denotes averaging over the distribution $Q(s)$ of the jump size $s$. Since the jump term is already multiplied by the jump probability $\propto \mathrm{d}t$, the drift and noise contributions there are of higher order in $\mathrm{d}t$ and were dropped.

The averaging over the jump size in (5) yields a convolution with the jump size distribution $Q(s)$:

$\displaystyle \left\langle P_t(y-s) \right\rangle_s = \int_0^\infty \mathrm{d}s \,Q(s)P_t(y - s)$.

The average over the noise $W$ in (5) is the same as for standard diffusion. During the interval $\mathrm{d}t$, the increment of the noise term in (2) is

$\displaystyle W(t+\mathrm{d}t)-W(t) = \int_t^{t+\mathrm{d}t}\mathrm{d}t'\,\xi(t') =: V\sqrt{\mathrm{d}t}$,

where the last equality is a definition for $V$.
Since $W$ is a Brownian motion, $V$ is normally distributed:

$\displaystyle \mathcal{N}(V) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{V^2}{2}\right)$.

Thus, the average over $W$ in (5) is

$\displaystyle \begin{array}{rl} \displaystyle \left\langle P_t\left(y-\mu \,\mathrm{d}t - \left[W(t+\mathrm{d}t)-W(t)\right] \right) \right\rangle_W = & \displaystyle \int_{-\infty}^\infty \mathrm{d}V\,\mathcal{N}(V)\, P_t\left(y-\mu \,\mathrm{d}t - V \sqrt{\mathrm{d}t} \right) \\ = & \displaystyle P_t(y) - \mu\, \mathrm{d}t\, \partial_y P_t(y)- \sqrt{\mathrm{d}t} \partial_y P_t(y) \int_{-\infty}^\infty \mathrm{d}V\, \mathcal{N}(V)\, V \\ & \displaystyle + \frac{1}{2}\mathrm{d}t \partial_y^2 P_t(y)\, \int_{-\infty}^\infty \mathrm{d}V\, \mathcal{N}(V)\, V^2 + \mathcal{O}(\mathrm{d}t)^2 \\ =& \displaystyle P_t(y) - \mu\, \mathrm{d}t\, \partial_y P_t(y) + \frac{1}{2}\mathrm{d}t\,\partial_y^2 P_t(y) \end{array}$.

In the last line we used the fact that the normal distribution has mean zero and variance 1, and dropped all terms of higher order than $\mathrm{d}t$. Inserting all this into (5), we obtain the jump-diffusion FPE (4).

2. Example: Exponential jumps

Now let us consider a simple example, where the jump sizes are distributed exponentially:

$\displaystyle Q(s) = \alpha\,\exp(-\alpha s),\quad\quad \alpha > 0$.

We will compute the distribution of $y(t)$, starting from $y(0)=0$.
For this, we solve the jump-diffusion FPE (4) with the initial condition $P_{t=0}(y) = \delta(y)$. Let us take a Fourier transform of (4) from $y$ to $k$:

$\begin{array}{rl} \displaystyle \tilde{P}_t(k) := & \displaystyle \int_{-\infty}^\infty \mathrm{d}y\,e^{i k y}\,P_t(y), \\ \displaystyle \partial_t \tilde{P}_t(k) = & \displaystyle -\frac{\sigma^2}{2}k^2\,\tilde{P}_t(k) + i\mu k\, \tilde{P}_t(k) - \lambda \tilde{P}_t(k) + \lambda\tilde{Q}(k)\tilde{P}_t(k) \end{array}$.

The different spatial modes decouple, and instead of a partial differential equation we obtain an ordinary differential equation for each value of the Fourier variable $k$.
For the above exponential form of the jump size distribution $Q$,

$\displaystyle \tilde{Q}(k) = \frac{\alpha}{\alpha - i k}$.

Furthermore, the initial condition $P_{t=0}(y) = \delta(y)$ gives $\tilde{P}_{t=0}(k) = 1$. Hence, the solution $\tilde{P}$ reads

$\displaystyle \tilde{P}_t(k) = \exp\left[ \left( -\frac{\sigma^2}{2}k^2+ i\mu k + i\frac{\lambda k}{\alpha - i k} \right) t\right]$.

While it does not seem feasible to invert this Fourier transform to obtain a closed expression for $P_t(y)$ (but if anyone has an idea, let me know!), $\tilde{P}_t(k)$ is already enough to determine the moments of $y(t)$. Taking derivatives, we obtain for example

$\displaystyle \begin{array}{rl} \left\langle y(t) \right\rangle = & -i\partial_k \big|_{k=0} \tilde{P}_t(k) = \left(\frac{\lambda}{\alpha}+\mu\right)t \\ \left\langle y(t)^2 \right\rangle^c = & -\partial_k^2 \big|_{k=0} \tilde{P}_t(k) + \left[\partial_k \big|_{k=0} \tilde{P}_t(k)\right]^2 \\ = & \left(\frac{2\lambda}{\alpha^2}+\sigma^2\right)t, \\ ... & \end{array}$.
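These moment formulas can be cross-checked numerically: the cumulants are derivatives of $\ln \tilde{P}_t(k)$ at $k=0$, which we can approximate by finite differences. A small Python sketch (parameter values illustrative):

```python
import numpy as np

mu, sigma, lam, alpha, t = 0.5, 1.0, 2.0, 3.0, 1.0

def log_P_tilde(k):
    # Logarithm of the characteristic function computed above
    return (-sigma**2 * k**2 / 2 + 1j * mu * k
            + 1j * lam * k / (alpha - 1j * k)) * t

eps = 1e-5
# First cumulant <y(t)> = -i d/dk ln P~ at k=0 (central difference)
c1 = (-1j * (log_P_tilde(eps) - log_P_tilde(-eps)) / (2 * eps)).real
# Second cumulant <y(t)^2>_c = -d^2/dk^2 ln P~ at k=0
c2 = (-(log_P_tilde(eps) - 2 * log_P_tilde(0) + log_P_tilde(-eps)) / eps**2).real

print(c1, (lam / alpha + mu) * t)
print(c2, (2 * lam / alpha**2 + sigma**2) * t)
```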

Similarly, solving the FPE (4) with an absorbing boundary allows computing first-passage times (or at least their moments) for our jump-diffusion process.

Have fun, and do let me know if you have any comments!

Written by inordinatum

November 20, 2013 at 8:04 pm

Tricks for inverting a Laplace Transform, part V: Pole Decomposition

This is a continuation of my articles on methods for inverting Laplace transforms. You can find the previous parts here: part I (guesses based on series expansions), part II (products and convolutions), part III, part IV (substitutions).

1. Result

In this post I will explain how to find the inverse $P(x)$ of the following Laplace transform:

$\displaystyle \int_0^\infty \mathrm{d}x\, e^{-sx}P(x) = \frac{\tanh \sqrt{2s}}{\sqrt{2s}}$.   (1)

The solution is given in terms of the Jacobi theta function $\theta_2$ (which can be re-expressed in terms of Jacobi elliptic functions, though I won’t discuss that here in detail):

$\displaystyle P(x) = \frac{1}{2}\theta_2 \left(0; e^{-x\pi^2/2}\right)$.   (2)

By taking derivatives or integrals with respect to $s$ (which correspond, up to signs, to multiplying or dividing $P(x)$ by $x$), one can obtain many other Laplace transform identities, including for example

$\displaystyle \int_0^\infty \mathrm{d}x\, \left(1-e^{-sx}\right)\frac{1}{2x}\theta_2 \left(0; e^{-x\pi^2/2}\right) = \ln \cosh \sqrt{2s}$.

If anyone manages to find a systematic list of those, I’d be very grateful. But for now let’s just see how one obtains (2).

2. Derivation

First, we use the classic decomposition of the hyperbolic functions into infinite products, due to Euler:

$\displaystyle \begin{array}{rl} \sinh z &= z \prod_{n=1}^{\infty}\left(1+\frac{z^2}{\pi^2 n^2}\right) \\ \cosh z &= \prod_{n=1}^{\infty}\left(1+\frac{z^2}{\pi^2 \left(n-\frac{1}{2}\right)^2}\right) \end{array}$.
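For the skeptical reader, these product formulas are easy to verify numerically; e.g. for $\cosh$, truncating the product at $10^5$ factors already reproduces the function to about six digits (a quick Python check):

```python
import numpy as np

z = 0.8
n = np.arange(1, 100001)
# Euler product for cosh(z), truncated at 10^5 factors
cosh_product = np.prod(1 + z**2 / (np.pi**2 * (n - 0.5)**2))
print(cosh_product, np.cosh(z))
```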

From the second identity, we can obtain a partial fraction decomposition of $\tanh z$ (following this post on StackExchange):

$\displaystyle \begin{array}{rl} \tanh z = & \partial_z \ln \cosh z = \partial_z \sum_{n=1}^{\infty} \ln \left(1+\frac{z^2}{\pi^2 \left(n-\frac{1}{2}\right)^2}\right) \\ = & \sum_{n=1}^{\infty} \frac{\frac{2z}{\pi^2 \left(n-\frac{1}{2}\right)^2}}{1+\frac{z^2}{\pi^2 \left(n-\frac{1}{2}\right)^2}} \\ = & 2z \sum_{n=1}^{\infty} \frac{1}{\frac{\pi^2(2n-1)^2}{4}+z^2}. \end{array}$.

Applying this to the right-hand side of (1), we obtain a sum over simple poles:

$\displaystyle \int_0^\infty \mathrm{d}x\, e^{-sx}P(x) = \frac{\tanh \sqrt{2s}}{\sqrt{2s}} = 2\sum_{n=1}^{\infty} \frac{1}{\frac{\pi^2(2n-1)^2}{4}+2s}$

The Laplace inverse of a simple pole is just an exponential, $\int_0^\infty \mathrm{d}x\, e^{-sx}\,e^{-p x}=\frac{1}{p+s}$. By linearity of the Laplace transform, we can invert each summand individually, and obtain an infinite sum representation for $P(x)$:

$\displaystyle P(x) = \sum_{n=1}^{\infty} \exp\left(-\frac{\pi^2(2n-1)^2}{8}x\right)$
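Before summing, one can check this representation numerically against the original Laplace transform (1): the term-wise Laplace transform of the series is the sum of simple poles above. Note that $2\sum_n \left[\pi^2(2n-1)^2/4 + 2s\right]^{-1} = \sum_n \left[\pi^2(2n-1)^2/8 + s\right]^{-1}$, which is the form used in this short Python sketch:

```python
import numpy as np

s = 0.7
n = np.arange(1, 200001)
p = np.pi**2 * (2 * n - 1)**2 / 8    # decay rates of the exponentials in P(x)

lhs = np.tanh(np.sqrt(2 * s)) / np.sqrt(2 * s)
rhs = np.sum(1.0 / (p + s))          # sum of the term-wise Laplace transforms
print(lhs, rhs)
```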

This sum can now be evaluated with Mathematica‘s Sum command, or by hand using the representation of theta functions in terms of the nome, for argument $z=0 \Leftrightarrow w=1$ and $q=e^{-\pi^2 x/2}$. This finally gives the solution as claimed above:

$\displaystyle P(x) = \frac{1}{2}\theta_2 \left(0; e^{-x\pi^2/2}\right)$.   (2)

Have fun!

Written by inordinatum

April 15, 2013 at 10:27 pm

Posted in Maths, Technical Tricks

Mean-field solution of the Random Field Ising Model

A happy new year to all the blog readers out there!

Today I will discuss a classical model of a disordered system that is extremely simple to write down: The Ising model with quenched (i.e. time-invariant) local random fields.
This model, also dubbed the Random-Field Ising Model (RFIM), is highly non-trivial. Above two dimensions, depending on the strength of the local random fields, it exhibits a phase transition between an ordered (ferromagnetic) and a disordered (paramagnetic) phase. When applying an additional homogeneous time-varying magnetic field, the local random fields prevent some regions from flipping. This leads to interesting phenomena like avalanches and hysteresis. The avalanche sizes, and the shape of the hysteresis loop, vary depending on the location in the phase diagram. Experimentally, this phenomenology – including the disorder-induced phase transition, avalanches, and hysteresis – describes very well e.g. helium condensation in disordered silica aerogels.

In this blog post, I will discuss the solution of the RFIM in the fully connected limit. This means that the ferromagnetic interaction between any two spins, no matter how far apart, has the same strength. This limit, also called the mean-field limit, is supposed to describe the model accurately for a sufficiently high spatial dimension. Just how high is “sufficient” is still a matter of debate – it may be above four dimensions, above six dimensions, or only when the dimension tends to infinity. Nevertheless, the mean-field limit already captures some of the interesting RFIM phenomenology, including the ordered-disordered phase transition. This is what I will discuss in this blog post. I will follow closely the original paper by T. Schneider and E. Pytte from 1977, but I’ve tried to extend it in many places, in order to make the calculations easier to follow.

1. The mean-field RFIM Hamiltonian and its free energy

The energy of $N$ fully connected Ising spins $S_i=\pm 1$, $i=1...N$, subject to local random fields $h_i$, is:

$\displaystyle H[S]=-\frac{J}{N}\sum_{i,j}S_i S_j -\sum_i S_i h_i$

The prefactor $1/N$ in the first term (the ferromagnetic interaction energy) is chosen so that the scaling of this term with $N$ is the same as the scaling of the second term (containing the random fields).

The corresponding partition sum $Z$ is the sum over all spin configurations:

$\displaystyle Z=\sum_{\{S_i=\pm 1\}} \,e^{-\beta H[S]}$

As usual, $\beta = T^{-1}$ is the inverse temperature of the system.
In order to see the phase transition, we need to consider the free energy $F$, averaged over all random fields $h$:

$\displaystyle \left\langle F\right\rangle_h = -T\left\langle \ln\, Z \right\rangle_h$

Through a somewhat lengthy calculation (which I explain in detail in the last section 3. of this post), one obtains the following expression for the disorder-averaged free energy:

$\displaystyle \left\langle F\right\rangle_h = N\left[J m^2 - T \int \mathrm{d} h\, p(h) \ln 2 \cosh \beta \left(2 J m + h\right) \right]$

where the typical magnetization $m$ is fixed through the implicit (self-consistent) equation

$\displaystyle m = m_{SC}(m):=\int \mathrm{d} h\, p(h) \tanh \beta \left(2 J m + h\right)$

$p(h)$ is the probability distribution of the random fields, assumed to be Gaussian with mean zero and variance $\sigma^2$:

$\displaystyle p(h) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{h^2}{2\sigma^2}\right)$
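As a quick numerical illustration, the self-consistent equation can be solved by simple fixed-point iteration, with the Gaussian average over $h$ evaluated by Gauss-Hermite quadrature. A minimal Python sketch (the function name is mine; the parameter values match the figure below):

```python
import numpy as np

def magnetization(J, T, sigma, n_iter=500):
    # Solve m = \int dh p(h) tanh(beta (2 J m + h)) by fixed-point iteration,
    # with p(h) Gaussian of mean 0 and variance sigma^2 (Gauss-Hermite quadrature)
    beta = 1.0 / T
    t, w = np.polynomial.hermite.hermgauss(80)
    h = np.sqrt(2.0) * sigma * t       # quadrature nodes for p(h)
    w = w / np.sqrt(np.pi)             # weights normalized so that <1> = 1
    m = 1.0
    for _ in range(n_iter):
        m = np.sum(w * np.tanh(beta * (2 * J * m + h)))
    return m

# J=1, T=0.5: sigma=1 is ferromagnetic, sigma=2 paramagnetic (cf. the figure)
print(magnetization(1.0, 0.5, 1.0))    # converges to a nontrivial m > 0
print(magnetization(1.0, 0.5, 2.0))    # iterates decay to the trivial solution m = 0
```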

Although the expression above for the free energy is written in terms of $p(h)$, strictly speaking I will show it only for the case where $p(h)$ has this Gaussian form. Whether the formula for the free energy also holds for other $p(h)$ is a good question which I’d like to understand better – if any expert happens to be reading this and can clarify, it would be great!

Now, let’s look at what information on the phase diagram of the model we can deduce from these expressions.

2. Phase diagram of the mean-field RFIM

Self-consistent equation for the magnetization of the RFIM. The dashed line indicates the consistency condition $m_{SC}(m)=m$. The red line is an example of $m_{SC}(m)$ in the ferromagnetic phase (here $J=1, T=0.5, \sigma=1$). It has a non-trivial value $m>0$ where it satisfies the self-consistency condition, i.e. intersects the dashed line. On the other hand, the yellow line is an example of $m_{SC}(m)$ in the paramagnetic phase (here $J=1, T=0.5, \sigma=2$). There, the only point satisfying the self-consistency condition is the trivial solution $m=0$.

The self-consistent equation for the magnetization $m$ given above has a trivial solution $0=m_{SC}(0)$ for all values of the parameters $\sigma, J, T$. Similar to what happens in the mean-field solution for the standard Ising model, if $\beta$ is sufficiently large, the slope $m_{SC}'(0)$ is larger than one and there is another non-trivial solution $m>0$ (see figure on the right). This indicates the existence of ferromagnetic order. Thus, the phase boundary between the paramagnetic and the ferromagnetic phase is given precisely by the condition

$\displaystyle \begin{array}{rl} m_{SC}'(m=0) =& 1 \\ \Leftrightarrow 2\beta J \int \mathrm{d} h\, \frac{p(h)}{\left(\cosh \beta h\right)^2} = &1 \end{array}$

One of the three parameters $\beta, J, \sigma$ can be eliminated by rescaling $h$. For example, we can eliminate $\beta$ and express the phase transition line in terms of the dimensionless variables $\tilde{J}:=\beta J$, $\tilde{\sigma}:= \beta \sigma$, $\tilde{h}:=\beta h$:

$\displaystyle 2 \tilde{J}(\tilde{\sigma}) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi\tilde{\sigma}^2}}}\frac{e^{-\frac{1}{2}\tilde{h}^2/\tilde{\sigma}^2}}{\left(\cosh \tilde{h}\right)^2}\right]^{-1}$

Another way is to scale away $\sigma$, setting $\hat{J}:= J/\sigma$, $\hat{\beta}:=\sigma \beta$.

Then one has the following expression for the phase boundary

$\displaystyle 2\hat{J}(\hat{\beta}) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi}}}\frac{e^{-\frac{1}{2}\tilde{h}^2/\hat{\beta}^2}}{\left(\cosh \tilde{h}\right)^2}\right]^{-1}$

With this expression, we see more clearly that there is a well-defined zero-temperature limit. When $\hat{\beta}\to\infty$, we get a transition at

$\displaystyle 2\hat{J}(\infty) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi}}}\frac{1}{\left(\cosh \tilde{h}\right)^2}\right]^{-1} = \sqrt{\pi/2}$

This is quite interesting, since it indicates a phase transition driven purely by the fluctuations of the quenched disorder, without any thermal noise!
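This zero-temperature value is easy to confirm numerically by evaluating the phase-boundary integral at a very large $\hat{\beta}$ (a quick Python check; the function name is mine):

```python
import numpy as np

def two_J_hat(beta_hat):
    # Phase boundary 2*J_hat(beta_hat), evaluating the integral on a fine grid
    h, dh = np.linspace(-30, 30, 600001, retstep=True)
    integrand = np.exp(-0.5 * (h / beta_hat)**2) / (np.sqrt(2 * np.pi) * np.cosh(h)**2)
    return 1.0 / (integrand.sum() * dh)

print(two_J_hat(1e4), np.sqrt(np.pi / 2))   # both approach sqrt(pi/2) ~ 1.2533
```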

Phase diagram of the mean-field RFIM

More generally, we can easily plot the phase diagram in terms of $\hat{J}, \hat{\beta}$ using Mathematica; see the figure on the left.

Of course, the expression for the free energy given above can also be used to deduce more elaborate observables, like the susceptibility, etc. For this, I refer to the original paper by T. Schneider and E. Pytte. For example, one finds that near the phase transition the magnetization vanishes with a power law,

$\displaystyle m(T) \sim (T_c-T)^{1/2}$

This is the typical mean-field power law, which is also found when studying the phase transition in the pure Ising model on the mean-field level.

Now that we’re convinced that these formulas for the free energy and the magnetization (boxed in orange above) are useful for understanding the physics of the mean-field RFIM, I will explain in detail how they were derived.

3. Calculating the free energy using the replica trick

Since averages over the logarithm of a random variable are not easy to compute, we use the replica trick to re-express the free energy in terms of moments of $Z$:

$\displaystyle \left\langle F\right\rangle_h = -T \frac{\partial}{\partial n}\Big|_{n=0}\left\langle Z^n \right\rangle_h$

$Z^n$ is the partition sum of $n$ copies, or replicas, of the system, feeling the same random fields $h$. Indexing these copies by Greek indices $\alpha=1...n$, we get

$\displaystyle \left\langle Z^n \right\rangle_h = \sum_{\{S_i^\alpha=\pm 1\}} e^{\frac{\beta J}{N} \sum_\alpha \sum_{i, j}S_i^\alpha S_j^\alpha}\left\langle e^{\beta \sum_i \left(\sum_\alpha S_i^\alpha\right) h_i}\right\rangle_h$

Since the $h_i$ are independent and Gaussian (with mean 0 and variance $\sigma^2$), $\left\langle e^{\lambda h_i}\right\rangle_h = \int \mathrm{d}h\,p(h)\,e^{\lambda h} = e^{\sigma^2\lambda^2/2}$. Then

$\displaystyle \left\langle Z^n \right\rangle_h = \sum_{\{S_i^\alpha=\pm 1\}} \exp\left[ \frac{\beta J}{N} \sum_\alpha \sum_{i, j} S_i^\alpha S_j^\alpha + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_\alpha S_i^\alpha\right)^2\right]$

We have performed the disorder averaging, so what remains is the partition sum of $n$ copies/replicas of a non-disordered fully-connected Ising model. They are interacting via the second term in the exponential, which came from the disorder averaging. We will now compute the partition sum of this replicated Hamiltonian. The plan for this is the following:

• Decouple the individual sites $i$ using a Hubbard-Stratonovich transformation.
• Apply the saddle-point method to compute the integral. Due to the decoupling, the saddle-point is controlled by the number of sites $N$. Thus, this procedure is exact in the thermodynamic limit $N\to\infty$.

To apply the Hubbard-Stratonovich transformation, we first rewrite the ferromagnetic interaction as

$\displaystyle \sum_\alpha \sum_{i, j} S_i^\alpha S_j^\alpha = \sum_\alpha \left(\sum_i S_i^\alpha \right)^2$

and then use

$\displaystyle \exp\left[\frac{\beta J}{N} \sum_\alpha \left(\sum_i S_i^\alpha \right)^2\right] = \frac{1}{(2\pi)^{n/2}}\int\prod_\alpha\mathrm{d}x_\alpha\,\exp\left[\sum_\alpha \left(x_\alpha \sqrt{2\frac{\beta J}{N}}\sum_i S_i^\alpha - \frac{1}{2}x_\alpha^2\right)\right]$

In order for the exponent to scale homogeneously (linearly) with the system size $N$, we set $x_\alpha = \sqrt{N} \tilde{x}_\alpha$:

$\displaystyle \left\langle Z^n \right\rangle_h = \left(\frac{N}{2\pi}\right)^{n/2}\sum_{\{S_i^\alpha=\pm 1\}} \int\prod_\alpha\mathrm{d}\tilde{x}_\alpha\,\exp\left[-\frac{N}{2}\sum_\alpha\tilde{x}_\alpha^2 + \sqrt{2\beta J}\sum_i \sum_\alpha \tilde{x}_\alpha S_i^\alpha + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_\alpha S_i^\alpha\right)^2\right]$

The exponential is now a linear sum over all sites $i$, without any interaction between them. Thus, the partition sum is the $N$-th power of the partition sum of a single site:

$\displaystyle \left\langle Z^n \right\rangle_h = \left(\frac{N}{2\pi}\right)^{n/2} \int\prod_\alpha\mathrm{d}\tilde{x}_\alpha\,\exp\, N\left[-\frac{1}{2}\sum_\alpha\tilde{x}_\alpha^2 + \ln Z_1(\tilde{x})\right],$

where $Z_1(\tilde{x})$ is the partition sum for all replicas on a single site:

$\displaystyle Z_1(\tilde{x}) = \sum_{\{S^\alpha=\pm 1\}} \exp\left[\sqrt{2\beta J}\sum_\alpha \tilde{x}_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2\right]$

In the thermodynamic limit $N\to\infty$, the $x$ integral in our expression for $\left\langle Z^n \right\rangle_h$ is dominated by its saddle point (due to the fact that the exponential is linear in $N$). At the saddle point, by symmetry we can assume all the $x_\alpha$ to be equal. Setting $\tilde{x}_\alpha := x$ for all $\alpha$, the exponent of the integrand in the above expression for $\displaystyle \left\langle Z^n \right\rangle_h$ becomes

$\displaystyle N\left[-\frac{n}{2}x^2 + \ln Z_1(x)\right], \quad \text{where} \quad Z_1(x) = \sum_{\{S^\alpha=\pm 1\}} \exp\left[\sqrt{2\beta J}x \sum_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2\right]$.

Hence, the saddle-point value $x=x_m$ is fixed by the implicit equation

$\displaystyle \partial_x \big|_{x=x_m}\left[-\frac{n}{2}x^2 + \ln Z_1(x)\right]=0 \Leftrightarrow n\, x_m = \frac{\partial_{x}|_{x=x_m}Z_1(x)}{Z_1(x_m)} = \sqrt{2\beta J}\frac{\sum_{\{S^\alpha=\pm 1\}} \left(\sum_\alpha S^\alpha\right)e^{A[S,x_m]}}{\sum_{\{S^\alpha=\pm 1\}} e^{A[S,x_m]}},$

where

$\displaystyle A[S,x] = \sqrt{2\beta J}\,x\sum_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2.$

We see that effectively, the saddle-point location $x_m$ corresponds to a fixed average magnetization $m := x_m / \sqrt{2\beta J}$, for the replicated spins $S^\alpha$ on a single lattice site, subject to the “Hamiltonian” $A[S,x]$. Rewriting everything in terms of the magnetization $m$, we get

$\displaystyle \left\langle Z^n \right\rangle_h \approx \exp\, N\left[-n\beta J m^2 + \ln Z_1(m)\right],$

$\displaystyle \begin{array}{rl} Z_1(m) = & \sum_{\{S^\alpha= \pm 1\}}e^{A[S,m]} \\ A[S,m] = & 2\beta J m\sum_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2 \end{array}$

$m$ is fixed by the implicit (self-consistent) equation

$\displaystyle n \, m = \frac{1}{Z_1(m)}\sum_{\{S^\alpha= \pm 1\}}\left(\sum_\alpha S^\alpha\right)e^{A[S,m]}$

Now, to further simplify the result, we can compute the sums over all spin configurations $\{S^\alpha\}$ by performing another Hubbard-Stratonovich transformation:

$e^{A[S,m]} = \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}s^2 + (2\beta J m + \beta \sigma s)\sum_\alpha S^\alpha\right]$

Then, the single-site partition sum can be computed as follows:

$\displaystyle \begin{array}{rl} Z_1(m) = &\sum_{\{S^\alpha= \pm 1\}}e^{A[S,m]} \\ = &\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \prod_{\alpha}\sum_{S^\alpha= \pm 1}e^{(2\beta J m + \beta \sigma s)S^\alpha} \\ = & \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\left[2\cosh(2\beta J m + \beta \sigma s)\right]^n \\ = &\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta J m + \beta \sigma s)\right] \end{array}$
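Since for the moment the replica number $n$ is a positive integer, this Hubbard-Stratonovich step can be checked numerically: for small $n$, the direct sum over the $2^n$ single-site spin configurations must equal the Gaussian integral of $\left[2\cosh(2\beta J m + \beta \sigma s)\right]^n$. A Python sketch (illustrative parameter values):

```python
import itertools
import numpy as np

beta, J, sigma, m, n = 2.0, 1.0, 1.0, 0.5, 3    # n = 3 replicas

# Direct sum over all 2^n single-site replica configurations
Z1_direct = 0.0
for S in itertools.product([1, -1], repeat=n):
    tot = sum(S)
    Z1_direct += np.exp(2 * beta * J * m * tot + beta**2 * sigma**2 / 2 * tot**2)

# Hubbard-Stratonovich form: Gaussian integral via Gauss-Hermite quadrature
t, w = np.polynomial.hermite.hermgauss(80)
s = np.sqrt(2.0) * t
Z1_hs = np.sum(w * (2 * np.cosh(2 * beta * J * m + beta * sigma * s))**n) / np.sqrt(np.pi)

print(Z1_direct, Z1_hs)   # the two expressions for Z_1(m) agree
```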

Similarly, the self-consistent equation for the mean magnetization simplifies to:

$\displaystyle \begin{array}{rl} n \, m = &\frac{1}{Z_1(m)}\sum_{\{S^\alpha= \pm 1\}}\left(\sum_\alpha S^\alpha\right)e^{A[S,m]} \\ =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \frac{1}{2\beta J }\frac{\partial}{\partial m}\prod_{\alpha}\sum_{S^\alpha= \pm 1}e^{(2\beta Jm + \beta \sigma s)S^\alpha} \\ =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \frac{1}{2\beta J}\frac{\partial}{\partial m} \left[2\cosh(2\beta Jm + \beta \sigma s)\right]^n \\ =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]\,n\, \tanh (2\beta Jm + \beta \sigma s) \\ \Rightarrow m = & \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]\tanh (2\beta Jm + \beta \sigma s) \end{array}$

In order to obtain the disorder-averaged free energy, as discussed above we need to take the derivative with respect to $n$ at $n=0$:

$\displaystyle \begin{array}{rl} \left\langle F\right\rangle_h = &-T \frac{\partial}{\partial n}\Big|_{n=0}\left\langle Z^n \right\rangle_h \\ = & N\left[Jm^2 - T\frac{\partial}{\partial n}\Big|_{n=0} \ln Z_1(m) \right] \\ = & N\left[Jm^2 - T\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \ln 2\cosh(2\beta Jm + \beta \sigma s)\right] \end{array}$

The magnetization $m$ is fixed by the self-consistent equation at $n=0$:

$\displaystyle m = \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\tanh (2\beta Jm + \beta \sigma s)$

By the substitution $h := \sigma s$, these formulae reduce to those shown in section 1, and then used to analyze the phase diagram in section 2.