# inordinatum

Physics and Mathematics of Disordered Systems

## Mean-field solution of the Random Field Ising Model

A happy new year to all the blog readers out there!

Today I will discuss a classical model of a disordered system that is extremely simple to write down: The Ising model with quenched (i.e. time-invariant) local random fields.
This model, also dubbed the Random-Field Ising Model (RFIM), is highly non-trivial. Above two dimensions, depending on the strength of the local random fields, it exhibits a phase transition between an ordered (ferromagnetic) and a disordered (paramagnetic) phase. When applying an additional homogeneous time-varying magnetic field, the local random fields prevent some regions from flipping. This leads to interesting phenomena like avalanches and hysteresis. The avalanche sizes, and the shape of the hysteresis loop, vary depending on the location in the phase diagram. Experimentally, this phenomenology – including the disorder-induced phase transition, avalanches, and hysteresis – describes very well e.g. helium condensation in disordered silica aerogels.

In this blog post, I will discuss the solution of the RFIM in the fully connected limit. This means that the ferromagnetic interaction between any two spins, no matter how far apart, has the same strength. This limit, also called the mean-field limit, is supposed to describe the model accurately for a sufficiently high spatial dimension. Just how high is “sufficient” is still a matter of debate – it may be above four dimensions, above six dimensions, or only when the dimension tends to infinity. Nevertheless, the mean-field limit already captures some of the interesting RFIM phenomenology, including the ordered-disordered phase transition. This is what I will discuss in this blog post. I will follow closely the original paper by T. Schneider and E. Pytte from 1977, but I’ve tried to extend it in many places, in order to make the calculations easier to follow.

## 1. The mean-field RFIM Hamiltonian and its free energy

The energy of $N$ fully connected Ising spins $S_i=\pm 1$, $i=1...N$, subject to local random fields $h_i$, is:

$\displaystyle H[S]=-\frac{J}{N}\sum_{i,j}S_i S_j -\sum_i S_i h_i$

The prefactor $1/N$ in the first term (the ferromagnetic interaction energy) is chosen so that the scaling of this term with $N$ is the same as the scaling of the second term (containing the random fields).
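As a quick sanity check, the Hamiltonian is straightforward to code up. Here is a minimal Python sketch (the function name `rfim_energy` is mine); it uses the fact that the double sum factorizes, $\sum_{i,j} S_i S_j = \left(\sum_i S_i\right)^2$:

```python
def rfim_energy(spins, fields, J):
    """Energy of the fully connected RFIM:
    H = -(J/N) * sum_{i,j} S_i S_j - sum_i S_i h_i.
    The double sum over i, j (including i = j) factorizes as (sum_i S_i)^2."""
    N = len(spins)
    total_spin = sum(spins)
    return -J / N * total_spin ** 2 - sum(s * h for s, h in zip(spins, fields))

# Fully polarized state with zero random fields: the interaction term
# is extensive, and H = -J * N exactly.
N, J = 100, 1.0
print(rfim_energy([1] * N, [0.0] * N, J))  # -> -100.0
```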

The corresponding partition sum $Z$ is the thermal average over all spin configurations:

$\displaystyle Z=\sum_{\{S_i=\pm 1\}} \,e^{-\beta H[S]}$

As usual, $\beta = T^{-1}$ is the inverse temperature of the system (in units where $k_B=1$).
In order to see the phase transition, we need to consider the free energy $F$, averaged over all random fields $h$:

$\displaystyle \left\langle F\right\rangle_h = -T\left\langle \ln\, Z \right\rangle_h$

Through a somewhat lengthy calculation (which I explain in detail in the last section 3. of this post), one obtains the following expression for the disorder-averaged free energy:

$\displaystyle \left\langle F\right\rangle_h = N\left[J m^2 - T \int \mathrm{d} h\, p(h) \ln 2 \cosh \beta \left(2 J m + h\right) \right]$

where the typical magnetization $m$ is fixed through the implicit (self-consistent) equation

$\displaystyle m = m_{SC}(m):=\int \mathrm{d} h\, p(h) \tanh \beta \left(2 J m + h\right)$

$p(h)$ is the probability distribution of the random fields, assumed to be Gaussian with mean zero and variance $\sigma^2$:

$\displaystyle p(h) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{h^2}{2\sigma^2}\right)$

Although the expression above for the free energy is written in terms of $p(h)$, strictly speaking I will show it only for the case where $p(h)$ has this Gaussian form. Whether the formula for the free energy also holds for other $p(h)$ is a good question which I’d like to understand better – if any expert happens to be reading this and can clarify, it would be great!
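To make the self-consistent equation concrete, here is a small numerical sketch (Python; the helper names `m_sc` and `solve_m` are mine). It evaluates $m_{SC}(m)$ for the Gaussian $p(h)$ by simple quadrature and iterates the map $m \mapsto m_{SC}(m)$ to a fixed point. For $J=1$, $T=0.5$, the iteration finds a non-trivial fixed point $m>0$ at weak disorder $\sigma=1$, but collapses to $m=0$ at strong disorder $\sigma=2$:

```python
import math

def m_sc(m, J, T, sigma, n_grid=2001, cutoff=8.0):
    """Right-hand side of the self-consistent equation:
    m_SC(m) = int dh p(h) tanh(beta (2 J m + h)), with Gaussian p(h)."""
    beta = 1.0 / T
    dh = 2 * cutoff * sigma / (n_grid - 1)
    total = 0.0
    for k in range(n_grid):
        h = -cutoff * sigma + k * dh
        p = math.exp(-h * h / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
        total += p * math.tanh(beta * (2 * J * m + h)) * dh
    return total

def solve_m(J, T, sigma, m0=1.0, n_iter=300):
    """Iterate m -> m_SC(m) to a fixed point, starting from m0 > 0."""
    m = m0
    for _ in range(n_iter):
        m = m_sc(m, J, T, sigma)
    return m

print(solve_m(1.0, 0.5, 1.0))  # ferromagnetic: converges to m > 0
print(solve_m(1.0, 0.5, 2.0))  # paramagnetic: converges to m = 0
```

The fixed-point iteration converges here because $|m_{SC}'|<1$ near the stable solution; a bisection on $m - m_{SC}(m)$ would work just as well.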

Now, let’s look at what information on the phase diagram of the model we can deduce from these expressions.

## 2. Phase diagram of the mean-field RFIM

Self-consistent equation for the magnetization of the RFIM. The dashed line indicates the consistency condition $m_{SC}(m)=m$. The red line is an example of $m_{SC}(m)$ in the ferromagnetic phase (here $J=1, T=0.5, \sigma=1$). It has a non-trivial value $m>0$ where it satisfies the self-consistency condition, i.e. intersects the dashed line. On the other hand, the yellow line is an example of $m_{SC}(m)$ in the paramagnetic phase (here $J=1, T=0.5, \sigma=2$). There, the only point satisfying the self-consistency condition is the trivial solution $m=0$.

The self-consistent equation for the magnetization $m$ given above has a trivial solution $0=m_{SC}(0)$ for all values of the parameters $\sigma, J, T$. Similar to what happens in the mean-field solution for the standard Ising model, if $\beta$ is sufficiently large, the slope $m_{SC}'(0)$ is larger than one and there is another non-trivial solution $m>0$ (see figure on the right). This indicates the existence of ferromagnetic order. Thus, the phase boundary between the paramagnetic and the ferromagnetic phase is given precisely by the condition

$\displaystyle \begin{array}{rl} m_{SC}'(m=0) =& 1 \\ \Leftrightarrow 2\beta J \int \mathrm{d} h\, \frac{p(h)}{\left(\cosh \beta h\right)^2} = &1 \end{array}$

One of the three parameters $\beta, J, \sigma$ can be eliminated by rescaling $h$. For example, we can eliminate $\beta$ and express the phase transition line in terms of the dimensionless variables $\tilde{J}:=\beta J$, $\tilde{\sigma}:= \beta \sigma$, $\tilde{h}:=\beta h$:

$\displaystyle 2 \tilde{J}(\tilde{\sigma}) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi\tilde{\sigma}^2}}}\frac{e^{-\frac{1}{2}\tilde{h}^2/\tilde{\sigma}^2}}{\left(\cosh \tilde{h}\right)^2}\right]^{-1}$

Another way is to scale away $\sigma$, setting $\hat{J}:= J/\sigma$, $\hat{\beta}:=\sigma \beta$.

Then one has the following expression for the phase boundary

$\displaystyle 2\hat{J}(\hat{\beta}) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi}}}\frac{e^{-\frac{1}{2}\tilde{h}^2/\hat{\beta}^2}}{\left(\cosh \tilde{h}\right)^2}\right]^{-1}$

With this expression, we see more clearly that there is a well-defined zero-temperature limit. When $\hat{\beta}\to\infty$, we get a transition at

$\displaystyle 2\hat{J}(\infty) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi}}}\frac{1}{\left(\cosh \tilde{h}\right)^2}\right]^{-1} = \sqrt{\pi/2}$
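This value is easy to verify numerically, using $\int \mathrm{d}\tilde{h}\,\left(\cosh\tilde{h}\right)^{-2} = 2$; a quick Python check:

```python
import math

# int dt/sqrt(2*pi) * 1/cosh(t)^2 = 2/sqrt(2*pi), so its inverse is sqrt(pi/2).
n, cutoff = 4001, 20.0
dt = 2 * cutoff / (n - 1)
integral = sum(dt / (math.sqrt(2 * math.pi) * math.cosh(-cutoff + k * dt) ** 2)
               for k in range(n))
two_J_hat = 1.0 / integral
print(two_J_hat)  # close to sqrt(pi/2) ~ 1.2533
```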

This is quite interesting, since it indicates a phase transition driven purely by the fluctuations of the quenched disorder, without any thermal noise!

Phase diagram of the mean-field RFIM

More generally, we can easily plot the phase diagram in terms of $\hat{J}, \hat{\beta}$ using Mathematica; see the figure on the left.
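The same curve can also be tabulated without Mathematica; here is a Python sketch (the helper name `J_hat_boundary` is mine) that evaluates the phase boundary $\hat{J}(\hat\beta)$ by quadrature. It interpolates between the pure-Ising limit $2\hat{J} \approx 1/\hat\beta$ (i.e. $2\beta J = 1$) at high temperature and the disorder-driven transition $2\hat{J} = \sqrt{\pi/2}$ at zero temperature:

```python
import math

def J_hat_boundary(beta_hat, n=4001, cutoff=25.0):
    """Phase boundary in dimensionless variables:
    2*J_hat = [ int dt/sqrt(2*pi) exp(-t^2/(2*beta_hat^2)) / cosh(t)^2 ]^(-1)."""
    dt = 2 * cutoff / (n - 1)
    integral = sum(
        dt * math.exp(-t * t / (2 * beta_hat ** 2))
        / (math.sqrt(2 * math.pi) * math.cosh(t) ** 2)
        for t in (-cutoff + k * dt for k in range(n))
    )
    return 0.5 / integral

# The critical J_hat decreases monotonically with beta_hat (i.e. as T drops)
# and saturates at sqrt(pi/2)/2 in the zero-temperature limit.
for beta_hat in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(beta_hat, J_hat_boundary(beta_hat))
```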

Of course, the expression for the free energy given above can also be used to deduce more elaborate observables, like the susceptibility, etc. For this, I refer to the original paper by T. Schneider and E. Pytte. For example, one can find that near the phase transition the magnetization vanishes as a power law,

$\displaystyle m(T) \sim (T_c-T)^{1/2}$

This is the typical mean-field power law, which is also found when studying the phase transition in the pure Ising model on the mean-field level.
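This exponent is easy to verify numerically in the zero-disorder limit $\sigma\to 0$ (the pure mean-field Ising model, where the self-consistent equation reduces to $m=\tanh(2\beta J m)$ and $T_c = 2J$); the sketch below, with my own helper names, estimates the exponent from two temperatures just below $T_c$:

```python
import math

def magnetization(T, J=1.0):
    """Positive solution of m = tanh(2*J*m/T) (the sigma -> 0 limit of the
    self-consistent equation), found by bisection; valid for T < 2*J."""
    f = lambda m: math.tanh(2 * J * m / T) - m
    lo, hi = 1e-12, 1.0  # f(lo) > 0 and f(hi) < 0 for T < 2*J
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Tc = 2.0  # critical temperature for J = 1, sigma = 0
eps1, eps2 = 1e-3, 1e-4
exponent = (math.log(magnetization(Tc - eps1) / magnetization(Tc - eps2))
            / math.log(eps1 / eps2))
print(exponent)  # close to 1/2
```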

Now that we’re convinced that the formulas for the free energy and the magnetization given above are useful for understanding the physics of the mean-field RFIM, I will explain in detail how they were derived.

## 3. Calculating the free energy using the replica trick

Since averages over the logarithm of a random variable are not easy to compute, we use the replica trick, based on the identity $\left\langle \ln Z \right\rangle_h = \lim_{n\to 0}\frac{\left\langle Z^n\right\rangle_h - 1}{n}$, to re-express the free energy in terms of moments of $Z$:

$\displaystyle \left\langle F\right\rangle_h = -T \frac{\partial}{\partial n}\Big|_{n=0}\left\langle Z^n \right\rangle_h$

$Z^n$ is the partition sum of $n$ copies, or replicas, of the system, all feeling the same random fields $h$. Indexing these copies by Greek indices $\alpha=1,\dots,n$, we get

$\displaystyle \left\langle Z^n \right\rangle_h = \sum_{\{S_i^\alpha=\pm 1\}} e^{\frac{\beta J}{N} \sum_\alpha \sum_{i, j}S_i^\alpha S_j^\alpha}\left\langle e^{\beta \sum_i \left(\sum_\alpha S_i^\alpha\right) h_i}\right\rangle_h$

Since the $h_i$ are independent and Gaussian (with mean 0 and variance $\sigma^2$), $\left\langle e^{\lambda h_i}\right\rangle_h = \int \mathrm{d}h\,p(h)\,e^{\lambda h} = e^{\sigma^2\lambda^2/2}$. Then

$\displaystyle \left\langle Z^n \right\rangle_h = \sum_{\{S_i^\alpha=\pm 1\}} \exp\left[ \frac{\beta J}{N} \sum_\alpha \sum_{i, j} S_i^\alpha S_j^\alpha + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_\alpha S_i^\alpha\right)^2\right]$

We have performed the disorder averaging, so what remains is the partition sum of $n$ copies/replicas of a non-disordered fully-connected Ising model. They are interacting via the second term in the exponential, which came from the disorder averaging. We will now compute the partition sum of this replicated Hamiltonian. The plan for this is the following:

• Decouple the individual sites $i$ using a Hubbard-Stratonovich transformation.
• Apply the saddle-point method to compute the integral. Due to the decoupling, the saddle-point is controlled by the number of sites $N$. Thus, this procedure is exact in the thermodynamic limit $N\to\infty$.

To apply the Hubbard-Stratonovich transformation, we first rewrite the ferromagnetic interaction as

$\displaystyle \sum_\alpha \sum_{i, j} S_i^\alpha S_j^\alpha = \sum_\alpha \left(\sum_i S_i^\alpha \right)^2$

and then use

$\displaystyle \exp\left[\frac{\beta J}{N} \sum_\alpha \left(\sum_i S_i^\alpha \right)^2\right] = \frac{1}{(2\pi)^{n/2}}\int\prod_\alpha\mathrm{d}x_\alpha\,\exp\left[\sum_\alpha \left(x_\alpha \sqrt{2\frac{\beta J}{N}}\sum_i S_i^\alpha - \frac{1}{2}x_\alpha^2\right)\right]$
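The Hubbard-Stratonovich transformation used here is just the Gaussian integral $e^{a y^2} = \int \frac{\mathrm{d}x}{\sqrt{2\pi}}\, e^{-x^2/2 + \sqrt{2a}\,x y}$, applied per replica with $a = \beta J/N$ and $y = \sum_i S_i^\alpha$. A quick numerical sanity check of the identity in Python (the function name and parameter values are arbitrary):

```python
import math

def hs_rhs(a, y, n=20001, cutoff=30.0):
    """Evaluate int dx/sqrt(2*pi) exp(-x^2/2 + sqrt(2a)*x*y) by quadrature."""
    dx = 2 * cutoff / (n - 1)
    return sum(
        dx / math.sqrt(2 * math.pi) * math.exp(-0.5 * x * x + math.sqrt(2 * a) * x * y)
        for x in (-cutoff + k * dx for k in range(n))
    )

a, y = 0.3, 1.7
print(hs_rhs(a, y), math.exp(a * y * y))  # the two values agree
```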

In order for the exponent to scale homogeneously with the system size $N$, we set $x_\alpha = \sqrt{N}\, \tilde{x}_\alpha$:

$\displaystyle \left\langle Z^n \right\rangle_h = \left(\frac{N}{2\pi}\right)^{n/2}\sum_{\{S_i^\alpha=\pm 1\}} \int\prod_\alpha\mathrm{d}\tilde{x}_\alpha\,\exp\left[-\frac{N}{2}\sum_\alpha\tilde{x}_\alpha^2 + \sqrt{2\beta J}\sum_i \sum_\alpha \tilde{x}_\alpha S_i^\alpha + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_\alpha S_i^\alpha\right)^2\right]$

The exponential is now a linear sum over all sites $i$, without any interaction between them. Thus, the partition sum is the $N$-th power of the partition sum of a single site:

$\displaystyle \left\langle Z^n \right\rangle_h = \left(\frac{N}{2\pi}\right)^{n/2} \int\prod_\alpha\mathrm{d}\tilde{x}_\alpha\,\exp\, N\left[-\frac{1}{2}\sum_\alpha\tilde{x}_\alpha^2 + \ln Z_1(\tilde{x})\right],$

where $Z_1(\tilde{x})$ is the partition sum for all replicas on a single site:

$\displaystyle Z_1(\tilde{x}) = \sum_{\{S^\alpha=\pm 1\}} \exp\left[\sqrt{2\beta J}\sum_\alpha \tilde{x}_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2\right]$

In the thermodynamic limit $N\to\infty$, the $\tilde{x}$ integral in our expression for $\left\langle Z^n \right\rangle_h$ is dominated by its saddle point, since the exponent is proportional to $N$. At the saddle point, by symmetry we can assume all the $\tilde{x}_\alpha$ to be equal. Setting $\tilde{x}_\alpha := x$ for all $\alpha$, the exponent of the integrand in the above expression for $\left\langle Z^n \right\rangle_h$ becomes

$\displaystyle N\left[-\frac{n}{2}x^2 + \ln Z_1(x)\right], \quad \text{where} \quad Z_1(x) = \sum_{\{S^\alpha=\pm 1\}} \exp\left[\sqrt{2\beta J}x \sum_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2\right]$.

Hence, the saddle-point value $x=x_m$ is fixed by the implicit equation

$\displaystyle \partial_x \big|_{x=x_m}\left[-\frac{n}{2}x^2 + \ln Z_1(x)\right]=0 \Leftrightarrow n\, x_m = \frac{\partial_{x}|_{x=x_m}Z_1(x)}{Z_1(x_m)} = \sqrt{2\beta J}\frac{\sum_{\{S^\alpha=\pm 1\}} \left(\sum_\alpha S^\alpha\right)e^{A[S,x_m]}}{\sum_{\{S^\alpha=\pm 1\}} e^{A[S,x_m]}},$

where

$\displaystyle A[S,x] = \sqrt{2\beta J}\,x\sum_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2.$

We see that effectively, the saddle-point location $x_m$ corresponds to a fixed average magnetization $m := x_m / \sqrt{2\beta J}$, for the replicated spins $S^\alpha$ on a single lattice site, subject to the “Hamiltonian” $A[S,x]$. Rewriting everything in terms of the magnetization $m$, we get

$\displaystyle \left\langle Z^n \right\rangle_h \approx \exp\, N\left[-n\beta J m^2 + \ln Z_1(m)\right],$

$\displaystyle \begin{array}{rl} Z_1(m) = & \sum_{\{S^\alpha= \pm 1\}}e^{A[S,m]} \\ A[S,m] = & 2\beta J m\sum_\alpha S^\alpha + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2 \end{array}$

$m$ is fixed by the implicit (self-consistent) equation

$\displaystyle n \, m = \frac{1}{Z_1(m)}\sum_{\{S^\alpha= \pm 1\}}\left(\sum_\alpha S^\alpha\right)e^{A[S,m]}$

Now, to further simplify the result, we can compute the sums over all spin configurations $\{S^\alpha\}$ by performing another Hubbard-Stratonovich transformation:

$e^{A[S,m]} = \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}s^2 + (2\beta J m + \beta \sigma s)\sum_\alpha S^\alpha\right]$

Then, the single-site partition sum can be computed as follows:

$\displaystyle \begin{array}{rl} Z_1(m) = &\sum_{\{S^\alpha= \pm 1\}}e^{A[S,m]} \\ = &\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \prod_{\alpha}\sum_{S^\alpha= \pm 1}e^{(2\beta J m + \beta \sigma s)S^\alpha} \\ = & \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\left[2\cosh(2\beta J m + \beta \sigma s)\right]^n \\ = &\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta J m + \beta \sigma s)\right] \end{array}$
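For integer $n$, this Gaussian-integral representation of $Z_1(m)$ can be checked against a brute-force sum over all $2^n$ replica configurations. A small Python sketch (the function names and parameter values are mine):

```python
import math
from itertools import product

def z1_brute(m, n, beta, J, sigma):
    """Z_1(m) = sum over all {S^alpha = +-1} of exp(A[S, m])."""
    total = 0.0
    for spins in product((-1, 1), repeat=n):
        tot = sum(spins)
        total += math.exp(2 * beta * J * m * tot + 0.5 * (beta * sigma) ** 2 * tot ** 2)
    return total

def z1_gauss(m, n, beta, J, sigma, n_grid=20001, cutoff=30.0):
    """Z_1(m) = int ds/sqrt(2*pi) exp(-s^2/2) [2 cosh(2 beta J m + beta sigma s)]^n."""
    ds = 2 * cutoff / (n_grid - 1)
    return sum(
        ds / math.sqrt(2 * math.pi) * math.exp(-0.5 * s * s)
        * (2 * math.cosh(2 * beta * J * m + beta * sigma * s)) ** n
        for s in (-cutoff + k * ds for k in range(n_grid))
    )

m, n, beta, J, sigma = 0.3, 3, 1.0, 0.5, 0.7
print(z1_brute(m, n, beta, J, sigma), z1_gauss(m, n, beta, J, sigma))
```

Of course, the point of the replica trick is to continue the Gaussian form analytically to non-integer $n$, which the brute-force sum cannot do.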

Similarly, the self-consistent equation for the mean magnetization simplifies to:

$\displaystyle \begin{array}{rl} n \, m = &\frac{1}{Z_1(m)}\sum_{\{S^\alpha= \pm 1\}}\left(\sum_\alpha S^\alpha\right)e^{A[S,m]} \\ =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \frac{1}{2\beta J }\frac{\partial}{\partial m}\prod_{\alpha}\sum_{S^\alpha= \pm 1}e^{(2\beta Jm + \beta \sigma s)S^\alpha} \\ =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \frac{1}{2\beta J}\frac{\partial}{\partial m} \left[2\cosh(2\beta Jm + \beta \sigma s)\right]^n \\ =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]\,n\, \tanh (2\beta Jm + \beta \sigma s) \\ \Rightarrow m = & \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]\tanh (2\beta Jm + \beta \sigma s) \end{array}$

In order to obtain the disorder-averaged free energy, as discussed above we need to take the derivative with respect to $n$ at $n=0$:

$\displaystyle \begin{array}{rl} \left\langle F\right\rangle_h = &-T \frac{\partial}{\partial n}\Big|_{n=0}\left\langle Z^n \right\rangle_h \\ = & N\left[Jm^2 - T\frac{\partial}{\partial n}\Big|_{n=0} Z_1(m) \right] \\ = & N\left[Jm^2 - T\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \ln 2\cosh(2\beta Jm + \beta \sigma s)\right] \end{array}$

The magnetization $m$ is fixed by the self-consistent equation at $n=0$:

$\displaystyle m = \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\tanh (2\beta Jm + \beta \sigma s)$

By the substitution $h := \sigma s$, these formulae reduce to those shown in section 1, and then used to analyze the phase diagram in section 2.

Written by inordinatum

January 20, 2013 at 7:16 pm

### 7 Responses

1. Thanks a lot for your post. It is quite useful.

I think there may be a problem with the implicit determination of $x_m$ above. The saddle point evaluation for each $x_\alpha$ gives me

$\displaystyle x_m = \frac{\partial_{x_\alpha}|_{x_\alpha=x_m}Z_1(x)}{Z_1(x_m)} = \sqrt{2\beta J}\frac{\sum_{\{S^\alpha=\pm 1\}} S^\alpha e^{A[S,x_m]}}{\sum_{\{S^\alpha=\pm 1\}} e^{A[S,x_m]}}$

which is $1/n$ smaller than what you obtained (see also A9 in Schneider and Pytte). If that is correct, then the self-consistent equation for the mean magnetization is also missing a factor of 1/n on the second and third lines, but fortunately that term gets canceled by the factor of $n$ that should appear upon taking the derivative with respect to $m$.

If I got it wrong, then I’m really confused with what is going on.

edit: fixed Latex tags

Patrick Charbonneau

February 2, 2014 at 10:57 pm

• Hi Patrick,

thanks a lot for your interest, and for the detailed reading! I think you are right, there is a factor of 1/n missing in my equation for $x_m$. My idea was to set first $x_\alpha=x$ for all $\alpha$, and then perform the optimization with respect to $x$. But then I should have an extra factor $n$ on the left-hand side of my self-consistent equation, since the sum over $\alpha$ gives $n$ times the same term. This then gives the same equation as you propose. I’ll recheck it more carefully in the next days and then update the post.

Of course, with your method of taking the saddle-point equation directly for each of the $x_\alpha$, without setting them to be equal first, the derivation becomes even easier. I’ll add a remark to that above, thank you very much!

Alex

inordinatum

February 3, 2014 at 10:12 pm

• I’ve now corrected the factors of $n$ as discussed above… If anybody discovers any more questionable manipulations, please let me know!!

inordinatum

February 12, 2014 at 9:54 pm

2. Very very useful. Please upload more on the prerequisite mathematics needed to do research in this direction

romie

September 12, 2014 at 12:01 pm

• Thanks! For general introductions to this kind of topics, I can recommend the books by M. Kardar (Statistical Physics of Particles and Statistical Physics of Fields) and by A. Altland/B. Simons (Condensed Matter Field Theory). Have fun!

inordinatum

September 14, 2014 at 11:02 am

• Thanks for the recommendations, I read the stat phys of field by kardar but find it difficult to understand the content on chapter 5 (perturbative RG). It would be helpful for me if you can recommend a good book on the introduction to RG. Thanks in advance.

Romie

November 13, 2014 at 3:56 pm

• Indeed starting off with perturbative RG can be a daunting task 🙂
If you’re still interested I can recommend the book by Cardy (Scaling and Renormalization in Statistical Physics). RG in general is pretty well described in the book by Peskin and Schroeder (An Introduction to Quantum Field Theory), though that’s more oriented towards particle physics.

Other than that, I found that trying to solve problems is both useful and necessary for gaining an understanding of that topic 🙂

Have fun!
Alexander

inordinatum

January 18, 2015 at 7:24 pm