
## Simulating Directed Polymers and the Tracy-Widom Distribution

A few posts ago I briefly talked about the problem of a directed polymer in a random medium. In that post, I discussed the scaling behaviour of the polymer fluctuations due to the disordered environment, i.e. its roughness exponent, in $1+1$ dimensions.

As with many other statistical physics problems, for gaining an intuitive understanding it is also useful to perform some numerical simulations. In this post I will show how the directed polymer partition sum can be simulated using Wolfram Mathematica. I will also compare the results to the known exact result for the distribution of the free energy, the Tracy-Widom distribution.

# The transfer-matrix algorithm

The partition sum $Z$ of the directed polymer is defined as the sum over all directed paths $\gamma$ of the Boltzmann weight $e^{-\beta E[\gamma]}$ of each path. The energy $E[\gamma]$ of the path $\gamma$ is the sum over the random energies $V(t,x)$ of the sites $(t,x)$ which are touched by $\gamma$; $\beta = 1/T$ is the inverse temperature.

In any space dimension, this partition sum can be evaluated rapidly by using a **transfer-matrix algorithm**. Let us order the lattice along the longitudinal direction $t$ of the directed polymer. Then, the partition sum $Z(t+1,x)$ at site $(t+1,x)$ is obtained from the partition sums at this site’s neighbours in the preceding slice $t$:

$$Z(t+1,x) = e^{-\beta V(t+1,x)}\left[Z(t,x-1) + Z(t,x+1)\right].$$

For the free energy $F(t,x) := -T \ln Z(t,x)$, we obtain

$$F(t+1,x) = V(t+1,x) - T \ln\left(e^{-F(t,x-1)/T} + e^{-F(t,x+1)/T}\right).$$

Let us focus on the case of very low temperatures, $T \to 0$. Then the logarithm is dominated by the neighbour with the minimal (i.e. most negative) free energy $F$. Hence we obtain

$$F(t+1,x) = V(t+1,x) + \min\left[F(t,x-1),\,F(t,x+1)\right].$$

This also makes sense intuitively: At zero temperature, the partition sum of the directed polymer is dominated by the minimal-energy configuration. The “cost” of getting to site $(t+1,x)$ is the cheapest “cost” of getting to one of its neighbours, augmented by the (fixed) cost $V(t+1,x)$ of going to site $(t+1,x)$.
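For readers who do not use Mathematica, here is the zero-temperature recursion as a plain-Python sketch (my own translation of the algorithm, with periodic boundary conditions in the transversal direction; the function name and signature are my choices):

```python
import random

def free_energy(wd, length, seed=0):
    """Zero-temperature transfer matrix for a directed polymer:
    F(t+1, x) = V(t+1, x) + min over the two neighbours of x.

    Starts from the free initial condition F(0, x) = 0 and returns
    the list of free energies F(t = 2*length, x) for x = 0..wd-1.
    """
    rng = random.Random(seed)
    fval = [0.0] * wd
    for _ in range(length):
        # step 1: site i looks at neighbours i and i+1 (periodic)
        left = fval[1:] + fval[:1]
        fval = [min(a, b) + rng.gauss(0.0, 1.0) for a, b in zip(fval, left)]
        # step 2: site i looks at neighbours i-1 and i (periodic)
        right = fval[-1:] + fval[:-1]
        fval = [min(a, b) + rng.gauss(0.0, 1.0) for a, b in zip(fval, right)]
    return fval
```

Each pass through the loop performs two half-steps, so `length` iterations correspond to a final time $t = 2\,\mathtt{length}$, just as in the Mathematica code below.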

# The `Mathematica` simulation code

The following short `Mathematica` routine returns the free energy at the final time for a directed polymer:

```
fe = Compile[{{wd, _Integer}, {len, _Integer}},
   Module[{pot1, pot2, fval},
    (* random site energies for the two sublattices (even/odd k) *)
    pot1 = RandomVariate[NormalDistribution[], {len, wd}];
    pot2 = RandomVariate[NormalDistribution[], {len, wd}];
    (* free initial condition: F(t = 0, x) = 0 for all x *)
    fval = ConstantArray[0., wd];
    Do[
     (* componentwise Min[fval, RotateLeft[fval]] via UnitStep *)
     fval = (fval - (fval - RotateLeft[fval]) UnitStep[
           fval - RotateLeft[fval]]) + pot1[[k]];
     (* componentwise Min[fval, RotateRight[fval]] via UnitStep *)
     fval = (fval - (fval - RotateRight[fval]) UnitStep[
           fval - RotateRight[fval]]) + pot2[[k]],
     {k, 1, len}];
    fval]];
```

`k` is the longitudinal coordinate of the polymer, and runs from `1...len`. The vector `fval` has `wd` components and contains the free energy values at the current value of `k`, for all transversal coordinates `x=1...wd`.

For each `k` I perform two steps forward, one where site `i` neighbours on `i+1` and `i`, and one where site `i` neighbours on `i` and `i-1`. This ensures that in the end one obtains a standard square lattice of size `2len × wd`, where the directed polymer can choose between two symmetric alternatives at each time step. `pot1` and `pot2` are `len × wd` matrices containing the random energies for each site, for the even and odd values of `k`, respectively.

Since `Mathematica`‘s built-in `Min` function does not operate componentwise on vectors, I emulate it by using the componentwise `UnitStep` function: `a-(a-b)UnitStep[a-b] = Min[a,b]`.
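The identity is easy to verify case by case; here is the same construction as a quick Python check (illustration only):

```python
def unitstep(x):
    # Mathematica's UnitStep convention: 1 for x >= 0, 0 otherwise
    return 1.0 if x >= 0 else 0.0

def min_via_unitstep(a, b):
    # a - (a - b) UnitStep[a - b]: subtracts (a - b) exactly when a >= b,
    # which leaves behind the smaller of the two values
    return a - (a - b) * unitstep(a - b)
```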

At $t=0$, I use a free initial condition: $F(t=0,x)=0$ for all $x$, i.e. the polymer end at $t=0$ can be anywhere. Modifying the line `fval = ConstantArray[0., wd];`, one can change this e.g. to a fixed initial condition where the polymer end at $t=0$ is fixed to a single site.

The returned value is the vector $F(t,x)$, where `x=1...wd` and `t=2len` is the final time.

For example, to generate 10000 samples of the free energy for a directed polymer on a `256 × 256` lattice (`wd = 256`, `2 len = 256`), with the final end fixed, and the initial end free, you would call

```
Table[fe[256, 128][[1]], {10000}]
```

Note that the `[[1]]` picks out the free energy for a fixed final site. This simulation takes about a minute on my laptop.

# The Tracy-Widom distribution of the directed polymer free energy

Theory predicts the following form for the free energy of the directed polymer model in $1+1$ dimensions:

$$F(t) = c\,t + (\Gamma t)^{1/3}\chi + o(t^{1/3}),$$

where $o(t^{1/3})$ indicates terms subdominant in $t$. $\Gamma$ is a constant and $c\,t$ is the nonuniversal, deterministic part of the free energy. The *fluctuations* of the free energy due to randomness of the disorder, described by the second term, *are universal*: The random variable $\chi$ is distributed according to a Tracy-Widom distribution, and only its amplitude $\Gamma$ depends on discretisation details.

The precise distribution depends on the symmetries imposed by the initial condition.

- For a polymer free at $t=0$, $\chi$ follows $F_1$, the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE).
- For a polymer fixed at $t=0$, $\chi$ follows $F_2$, the Tracy-Widom distribution for the Gaussian Unitary Ensemble (GUE).

The distribution functions $F_1$ and $F_2$ can be expressed in terms of solutions of the Painlevé II differential equation. Tables of their numerical values can be found with very high precision e.g. on the web page of M. Prähofer.

In the figure on the right you can see how this theoretical prediction compares to numerical simulations using the code above. To get rid of the nonuniversal constants $c$ and $\Gamma$, I consider the “normalized” free energy

$$f := \frac{F - \langle F \rangle}{\sqrt{\operatorname{Var} F}},$$

which has zero mean and unit variance.

We see that its distribution observed in directed polymer simulations has a skewed shape, which agrees very well with the Tracy-Widom prediction (clearly much better than with the symmetric Gaussian distribution). In the tails, one can even “almost” distinguish the two universality classes corresponding to different initial conditions. For the difference to be really significant, one would need to run some longer simulations though.
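In practice the normalization amounts to shifting and rescaling the sample before comparing it to the (identically normalized) Tracy-Widom curves. A minimal Python sketch of this step (my own helper, not part of the original Mathematica code):

```python
import math

def normalize(samples):
    """Shift and rescale samples to zero mean and unit variance,
    removing the nonuniversal constants before a comparison with
    the (identically normalized) Tracy-Widom distributions."""
    n = len(samples)
    mean = sum(samples) / n
    # unbiased sample variance
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return [(s - mean) / math.sqrt(var) for s in samples]
```

Applying this to the `Table[...]` output above yields a sample that can be histogrammed directly against the tabulated $F_1$ and $F_2$ densities.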

# Further optimizations

Naturally one would like to make the simulation even faster, in order to have better statistics and reach higher system sizes. One may think that implementing the code above in a low-level programming language (e.g. `C++`) would yield a large performance gain. However, it turns out that a naive `C++` implementation (where the operations on the free-energy vector are implemented by a simple loop over the transversal coordinate `x`) is actually about three times *slower* than the `Mathematica` code above. This is because such array-wise operations can be optimized by using modern processor SIMD extensions, which are utilized in `Mathematica`‘s routines.

Of course, one can also take advantage of this in `C++`. One can either hand-optimize the loop over transversal coordinates using the processor-dependent SIMD routines, or use an array math library. For example, I implemented the transfer-matrix algorithm above in `C++` using the vector routines from Intel’s Math Kernel Library, and was able to achieve a speed-up by a factor of five compared to the `Mathematica` code. Still, considering the ease of programming and debugging, I find that the simple `Mathematica` implementation fares amazingly well!

If you’re interested in more details, references, higher-dimensional versions, the `C++` code, or anything related, go ahead and drop me a comment below — I’ll extend this post if there is interest 😉

## Tricks for inverting a Laplace Transform, part II: Products and Convolutions

*EDIT: In the meanwhile, I have continued the series of posts on Laplace Transform inversion. You can find the subsequent articles here: part I (guesses based on series expansions), part III, part IV (substitutions), part V (pole decomposition). Enjoy!*

Following the previous post on inverting Laplace transforms, here is another trick along the same lines. This one actually considers a generalization of the previous case:

**Find such that .**

As usual, the built-in `InverseLaplaceTransform` function from *Mathematica 8* fails to give a result. To obtain a closed formula manually, note that each of the factors can be easily inverted:

has the solution

.

Hence, using the fact that Laplace transforms of convolutions give products, the solution for can be written as a convolution:

.
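As a sanity check, the convolution theorem itself can be verified numerically. The snippet below is a pure-Python illustration with simple transforms of my own choosing (f(x) = 1 and g(x) = x, not the functions from this post): their convolution is x²/2, matching L[1]·L[x] = (1/s)(1/s²) = 1/s³ = L[x²/2].

```python
def convolve_at(f, g, x, n=2000):
    """Evaluate the Laplace convolution (f * g)(x) = ∫_0^x f(y) g(x - y) dy
    by the midpoint rule with n subintervals."""
    h = x / n
    return h * sum(f((i + 0.5) * h) * g(x - (i + 0.5) * h) for i in range(n))
```

For x = 2 this gives x²/2 = 2 up to quadrature error, as expected from the product of the two transforms.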

Computing the integral gives the following expression for in terms of the hypergeometric function :

Enjoy!!

## Tricks for inverting a Laplace Transform, part I: Guesses based on Series

*EDIT: In the meanwhile, I have continued the series of posts on Laplace Transform inversion. You can find the subsequent articles here: part II (products and convolutions), part III, part IV (substitutions), part V (pole decomposition). Enjoy!*

Working with probability distributions one frequently meets the problem of inverting a Laplace transform, which is notoriously difficult. An example that I met recently is the following:

**Find such that .**

The built-in `InverseLaplaceTransform` function from `Mathematica 8` fails to give a result. However, a closed formula can be obtained using a trick: Consider integer values of the parameter, for which the right-hand side becomes a rational function. Take its Laplace inverse, and then extend the result analytically to non-integer values.

Since this may be useful for other cases, too, here is a detailed account of how one proceeds:

For small integer values of , one obtains (e.g. using Mathematica):

By trial and error, one can now guess a general formula for integer values:

For example using Mathematica’s `Sum` command, one can re-write this in terms of a hypergeometric function:

This expression now has a well-defined meaning for any , and is the result we looked for. Actually, Mathematica even knows how to take its Laplace transform, which can be used to check its validity.
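The integer-then-continue strategy can also be tried on a simpler transform where the answer is known. The following pure-Python check uses an example of my own choosing (not the transform from this post): for F(s) = (1+s)^(−a), the integer-n inverses t^(n−1) e^(−t)/(n−1)! suggest the analytic continuation f(t) = t^(a−1) e^(−t)/Γ(a), and a numerical Laplace integral confirms the guess for non-integer a.

```python
import math

def laplace_numeric(f, s, t_max=40.0, n=40000):
    """Midpoint-rule approximation of the Laplace transform
    ∫_0^∞ e^(-s t) f(t) dt, truncated at t_max."""
    h = t_max / n
    return h * sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h)
                   for i in range(n))

def f_guess(t, a):
    # analytic continuation of t^(n-1) e^(-t) / (n-1)! to non-integer a
    return t ** (a - 1) * math.exp(-t) / math.gamma(a)
```

For a = 2.5 and s = 1, `laplace_numeric` reproduces (1 + s)^(−2.5) to four decimal places, validating the continuation at a non-integer exponent.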

More tricks for similar problems to come later!