inordinatum

Physics and Mathematics of Disordered Systems


Simulating Directed Polymers and the Tracy-Widom Distribution


A few posts ago I briefly talked about the problem of a directed polymer in a random medium. In that post, I discussed the scaling behaviour of the polymer fluctuations due to the disordered environment, i.e. its roughness exponent, in 2 dimensions.

As with many other statistical physics problems, numerical simulations are a useful complement for gaining an intuitive understanding. In this post I will show how the directed polymer partition sum can be simulated using Wolfram Mathematica. I will also compare the results to the known exact result for the distribution of the free energy, the Tracy-Widom distribution.

The transfer-matrix algorithm

The partition sum Z of the directed polymer is defined as the sum over all directed paths \Gamma of the Boltzmann weights e^{-\beta E_\Gamma} of each path. The energy E_\Gamma of the path \Gamma is the sum over the random energies E_i of the sites i which are touched by \Gamma. \beta = \frac{1}{T} is the inverse temperature.

In any space dimension, this partition sum can be evaluated rapidly by using a transfer-matrix algorithm. Let us order the lattice along the longitudinal direction of the directed polymer t. Then, the partition sum at site x, t is obtained from the partition sums at this site’s neighbours y\wedge x in the preceding slice t-1:

\displaystyle Z(x,t) = e^{-\beta E_{x,t}}\sum_{y \wedge x} Z(y,t-1).

For the free energy F(x,t) := -\frac{1}{\beta}\ln Z(x,t), we obtain

\displaystyle F(x,t) = E_{x,t} - \frac{1}{\beta} \log \sum_{y \wedge x} e^{-\beta F(y,t-1)}.

Let us focus on the case of very low temperatures, \beta \to \infty. Then the logarithm is dominated by the neighbour y with the minimal (i.e. most negative) free energy F(y,t-1). Hence we obtain

\displaystyle F(x,t) = E_{x,t} + \min_{y \wedge x} F(y,t-1).

This also makes sense intuitively: At zero temperature, the partition sum of the directed polymer is dominated by the minimal-energy configuration. The “cost” of getting to site x is the cheapest “cost” of getting to one of its neighbours, augmented by the (fixed) cost E(x,t) of going to site x.
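To make the recursion concrete, here is a minimal Python sketch of the zero-temperature transfer matrix. This is an illustrative transliteration, not the code used for the figures below: it uses periodic boundaries and a three-site neighbourhood x-1, x, x+1 instead of the two-half-step lattice of the Mathematica routine in the next section.

```python
import numpy as np

def free_energy_T0(energies):
    """Zero-temperature transfer matrix.

    energies: array of shape (t_max, width) with the random site energies E_{x,t}.
    Returns F(x, t_max) for a free initial condition F(x, 0) = 0,
    with periodic boundaries in the transversal direction.
    """
    width = energies.shape[1]
    fval = np.zeros(width)
    for e_slice in energies:
        # F(x,t) = E_{x,t} + min over the neighbours x-1, x, x+1 in slice t-1
        fval = e_slice + np.minimum(np.minimum(np.roll(fval, 1), fval),
                                    np.roll(fval, -1))
    return fval
```

With Gaussian site energies, e.g. `np.random.default_rng().normal(size=(128, 256))`, this reproduces the same qualitative behaviour as the Mathematica routine below.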

The Mathematica simulation code

The following short Mathematica routine returns the free energy at the final time for a directed polymer on a lattice of width wd and length 2 len:


fe = 
  Compile[{{wd, _Integer}, {len, _Integer}}, 
   Module[{pot1, pot2, fval}, 
    (* random site energies for the first and second half-step of each slice k *)
    pot1 = RandomVariate[NormalDistribution[], {len, wd}];
    pot2 = RandomVariate[NormalDistribution[], {len, wd}];
    (* free initial condition: F(x, t = 0) = 0 for all x *)
    fval = ConstantArray[0., wd];
    Do[
     (* half-step 1: site i neighbours sites i and i + 1;
        componentwise Min emulated via UnitStep *)
     fval = (fval - (fval - RotateLeft[fval]) UnitStep[
            fval - RotateLeft[fval]]) + pot1[[k]]; 
     (* half-step 2: site i neighbours sites i and i - 1 *)
     fval = (fval - (fval - RotateRight[fval]) UnitStep[
            fval - RotateRight[fval]]) + pot2[[k]];, {k, 1, len}]; 
    Return[fval]]];

k is the longitudinal coordinate of the polymer, and runs from 1...len. The vector fval has wd components and contains the free energy values at the current value of k, for all transversal coordinates x=1...wd.
For each k I perform two steps forward: one in which site i neighbours sites i and i+1, and one in which it neighbours sites i and i-1. This ensures that in the end one obtains a standard square lattice of size 2 len \times wd, where the directed polymer can choose between two symmetric alternatives at each time step. pot1 and pot2 are len \times wd matrices containing the random energies for each site, for the first and second half-step at each k, respectively.

Since Mathematica’s built-in Min function does not operate componentwise on vectors (applied to two lists, it returns the single overall minimum), I emulate it using the componentwise UnitStep function:

a-(a-b)UnitStep[a-b] = Min[a,b].
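This identity is easy to check numerically; here is a quick Python sanity check (note that Mathematica’s UnitStep[0] equals 1, which np.heaviside matches when its second argument is 1):

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0, 5.0])
b = np.array([1.0, 4.0, 2.0, -2.0])

# a - (a - b) UnitStep[a - b], with UnitStep[0] = 1 as in Mathematica
emulated = a - (a - b) * np.heaviside(a - b, 1.0)

assert np.array_equal(emulated, np.minimum(a, b))
```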

At t=0, I use a free initial condition: F(x,t=0)=0 \Leftrightarrow Z(x,t=0)=1, i.e. the polymer end at t=0 can be anywhere. Modifying the line

fval = ConstantArray[0., wd];

one can change this e.g. to a fixed initial condition where the polymer end at t=0 is fixed.
The returned value is the vector F(x,t) where x=1...wd and t=2len is the final time.

For example, to generate 10^4 samples of the free energy for a directed polymer on a 256\times 256 lattice, with the final end fixed, and the initial end free, you would call


Table[fe[256,128][[1]], {10000}]

Note that the [[1]] picks out the free energy for a fixed final site. This simulation takes about a minute on my laptop.

The Tracy-Widom distribution of the directed polymer free energy

Theory predicts the following form for the free energy F of the directed polymer model in 2 dimensions:

\displaystyle F(t) = ct - A t^{1/3}\chi + ... ,

where ... indicates terms subdominant in t. c is a constant and c t is the nonuniversal, deterministic part of the free energy. The fluctuations of the free energy due to randomness of the disorder, described by the second term, are universal: The random variable \chi is distributed according to a Tracy-Widom distribution, and only its amplitude A depends on discretisation details.

The precise distribution depends on the symmetries imposed by the initial condition.

  • For a polymer free at t=0, P(\chi \leq s) = F_1(s), where F_1(s) is the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE).
  • For a polymer fixed at t=0, P(\chi \leq s) = F_2(s), where F_2(s) is the Tracy-Widom distribution for the Gaussian Unitary Ensemble (GUE).

The functions F_1 and F_2 are solutions of Painlevé differential equations. Tables of their numerical values can be found with very high precision e.g. on the web page of M. Prähofer.


Tracy-Widom distribution and directed polymer free energy.

Blue line: Tracy-Widom distribution F_1 of the GOE ensemble.
Red line: Tracy-Widom distribution F_2 of the GUE ensemble.
Yellow line: Gaussian distribution.
Histogram: 10^5 simulations of a 256\times 256 directed polymer with free (blue) and fixed (red) initial condition, respectively.

All distributions are normalized so that mean=0 and variance=1.


In the figure you can see how this theoretical prediction compares to numerical simulations using the code above. To get rid of the nonuniversal constants c and A, I consider the “normalized” free energy

f_n := \left(F-\left\langle F \right\rangle\right)/\sqrt{\left\langle F^2 \right\rangle -\left\langle F \right\rangle^2}.

We see that the distribution observed in directed polymer simulations has a skewed shape, which agrees very well with the Tracy-Widom prediction (clearly much better than with the symmetric Gaussian distribution). In the tails, one can even “almost” distinguish the two universality classes corresponding to the different initial conditions. For the difference to be truly significant, though, one would need to run longer simulations.
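In code, the normalization step is straightforward; a small Python sketch (the sample values below are placeholders, not simulation output):

```python
import numpy as np

def normalize(free_energies):
    """Shift and rescale free-energy samples to zero mean and unit variance."""
    f = np.asarray(free_energies, dtype=float)
    return (f - f.mean()) / f.std()

# placeholder values standing in for simulated free energies
f_n = normalize([1.2, -0.7, 3.4, 0.1, 2.2])
```

One can then directly histogram f_n and overlay the (similarly normalized) Tracy-Widom curves.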

Further optimizations

Naturally one would like to make the simulation even faster, in order to gather better statistics and reach larger system sizes. One might think that implementing the code above in a low-level programming language (e.g. C++) would yield a large performance gain. However, it turns out that a naive C++ implementation (where the operations on the free-energy vector are implemented by a simple loop over the transversal coordinate x) is actually about three times slower than the Mathematica code above. This is because such array-wise operations can be optimized using modern processors’ SIMD extensions, which Mathematica’s routines exploit.

Of course, one can also take advantage of this in C++. One can either hand-optimize the loop over transversal coordinates using the processor-dependent SIMD routines, or use an array math library. For example, I implemented the transfer-matrix algorithm above in C++ using the vector routines from Intel’s Math Kernel Library, and was able to achieve a speed-up by a factor of five compared to the Mathematica code. Still, considering the ease of programming and debugging, I find that the simple Mathematica implementation fares amazingly well!

If you’re interested in more details, references, higher-dimensional versions, the C++ code, or anything related, go ahead and drop me a comment below — I’ll extend this post if there is interest 😉

Written by inordinatum

January 22, 2014 at 11:27 pm

Tricks for inverting a Laplace Transform, part II: Products and Convolutions


EDIT: In the meanwhile, I have continued the series of posts on Laplace Transform inversion. You can find the subsequent articles here: part I (guesses based on series expansions), part III, part IV (substitutions), part V (pole decomposition). Enjoy!

Following the previous post on inverting Laplace transforms, here is another trick along the same lines. This one considers a generalization of the previous case:

Find P(x) such that \int_0^\infty P(x)e^{\lambda x}dx=\left(1-q\lambda\right)^a \left(1-\lambda\right)^b.

As usual, the built-in InverseLaplaceTransform function from Mathematica 8 fails to give a result. To obtain a closed formula manually, note that each of the factors can be easily inverted:
\int_0^\infty e^{\lambda x}R_{a,q}(x)\,dx=(1-q\lambda)^a
has the solution
R_{a,q}(x)=\frac{e^{-\frac{x}{q}}\left(\frac{x}{q}\right)^{-1-a}}{q \Gamma(-a)}.

Hence, using the fact that Laplace transforms of convolutions give products, the solution for P(x) can be written as a convolution:
P(x) = \int_0^x dx' R_{a,q}(x')R_{b,1}(x-x').
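As a sanity check of the convolution formula (my addition, not part of the original derivation): for a = b = -1 both factors reduce to simple exponentials, R_{-1,q}(x) = e^{-x/q}/q, and the convolution can be integrated numerically and compared against the closed form (e^{-x/q} - e^{-x})/(q-1):

```python
import math

def R(a, q, x):
    # R_{a,q}(x) = e^{-x/q} (x/q)^{-1-a} / (q Gamma(-a)); for a = -1 this is e^{-x/q}/q
    return math.exp(-x / q) * (x / q) ** (-1 - a) / (q * math.gamma(-a))

def P_conv(q, x, n=20000):
    # midpoint-rule approximation of the convolution integral over [0, x]
    dx = x / n
    return dx * sum(R(-1.0, q, (i + 0.5) * dx) * R(-1.0, 1.0, x - (i + 0.5) * dx)
                    for i in range(n))

q, x = 2.0, 1.5
exact = (math.exp(-x / q) - math.exp(-x)) / (q - 1)  # closed form for a = b = -1
assert abs(P_conv(q, x) - exact) < 1e-6
```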

Computing the integral gives the following expression for P(x) in terms of the hypergeometric function \,_1 F_1:

P(x) = \frac{e^{-x}q^a x^{-1-a-b}}{\Gamma(-a-b)} \,_1 F_1\left(-a,-a-b,\frac{q-1}{q}x\right)

Enjoy!!

Written by inordinatum

September 3, 2011 at 10:34 am

Tricks for inverting a Laplace Transform, part I: Guesses based on Series


EDIT: In the meanwhile, I have continued the series of posts on Laplace Transform inversion. You can find the subsequent articles here: part II (products and convolutions), part III, part IV (substitutions), part V (pole decomposition). Enjoy!

Working with probability distributions, one frequently meets the problem of inverting a Laplace transform, which is notoriously difficult. An example that I encountered recently is the following:

Find P(x) such that \int_0^\infty P(x)e^{\lambda x}dx=\left(\frac{1-q\lambda}{1-\lambda}\right)^v.

The built-in InverseLaplaceTransform function from Mathematica 8 fails to give a result. However, a closed formula for P(x) can be obtained using a trick: Consider integer v, for which the right-hand side becomes a rational function. Take its Laplace inverse, and then extend the result analytically for non-integer v>0.

Since this may be useful in other cases, too, here is a detailed account of how one proceeds:

For small integer values of v, one obtains (e.g. using Mathematica):

v=0: P(x)=\delta(x)
v=1: P(x)=q \delta(x)+(1-q)e^{-x}
v=2: P(x)=q^2 \delta(x)+2q(1-q)e^{-x}+(1-q)^2 x e^{-x}
v=3: P(x)=q^3 \delta(x)+3q^2(1-q)e^{-x}+3q(1-q)^2 x e^{-x}+\frac{1}{2}(1-q)^3 x^2 e^{-x}
v=4: P(x)=q^4 \delta(x)+4q^3(1-q)e^{-x}+6q^2(1-q)^2 x e^{-x}+2q(1-q)^3 x^2 e^{-x}+\frac{1}{6}(1-q)^4 x^3 e^{-x}
v=5: P(x)=q^5 \delta(x)+5q^4(1-q)e^{-x}+10q^3(1-q)^2 x e^{-x}+5q^2(1-q)^3 x^2 e^{-x}+\frac{5}{6}q(1-q)^4 x^3 e^{-x}+\frac{1}{24}(1-q)^5 x^4 e^{-x}

From these results, one can now guess a general formula for integer v:

P(x)=q^v \delta(x)+e^{-x}\sum_{k=1}^v \frac{1}{(k-1)!}\binom{v}{k}q^{v-k} (1-q)^k x^{k-1}
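The guess can also be verified without Mathematica by transforming it back term by term: the delta term contributes q^v, and x^{k-1}e^{-x}/(k-1)! transforms to (1-\lambda)^{-k} under the kernel e^{\lambda x}, so the sum reassembles the binomial expansion of \left(q+\frac{1-q}{1-\lambda}\right)^v. A quick Python check (the values of v, q, \lambda are arbitrary):

```python
import math

def transform_of_guess(v, q, lam):
    """Laplace transform (kernel e^{lam*x}) of the guessed P(x) for integer v:
    the delta term gives q^v; each x^{k-1} e^{-x}/(k-1)! term gives (1-lam)^{-k}."""
    return q ** v + sum(
        math.comb(v, k) * q ** (v - k) * (1 - q) ** k / (1 - lam) ** k
        for k in range(1, v + 1))

for v in (1, 2, 3, 5):
    q, lam = 0.4, -0.7
    target = ((1 - q * lam) / (1 - lam)) ** v
    assert abs(transform_of_guess(v, q, lam) - target) < 1e-12
```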

Using Mathematica’s Sum command, for example, one can rewrite this in terms of a hypergeometric function:

P(x)=q^v \delta(x) + e^{-x}(1-q)q^{-1+v}v \,_1 F_1(1-v,2,-\frac{1-q}{q}x)

This expression now has a well-defined meaning for any v>0, and is the result we looked for. Actually, Mathematica even knows how to take its Laplace transform, which can be used to check its validity.

More tricks for similar problems to come later!

Written by inordinatum

July 28, 2011 at 6:44 pm