## Mean-field solution of the Random Field Ising Model

A happy new year to all the blog readers out there!

Today I will discuss a classical model of a disordered system that is extremely simple to write down: The Ising model with quenched (i.e. time-invariant) local random fields.

This model, also dubbed the *Random-Field Ising Model (RFIM)*, is highly non-trivial. Above two dimensions, depending on the strength of the local random fields, it exhibits a phase transition between an ordered (**ferromagnetic**) and a disordered (**paramagnetic**) phase. When applying an additional homogeneous time-varying magnetic field, the local random fields prevent some regions from flipping. This leads to interesting phenomena like avalanches and hysteresis. The avalanche sizes, and the shape of the hysteresis loop, vary depending on the location in the phase diagram. Experimentally, this phenomenology – including the disorder-induced phase transition, avalanches, and hysteresis – describes very well e.g. helium condensation in disordered silica aerogels.

In this blog post, I will discuss the solution of the RFIM in the fully connected limit. This means that the ferromagnetic interaction between any two spins, no matter how far apart, has the same strength. This limit, also called the *mean-field* limit, is supposed to describe the model accurately for a sufficiently high spatial dimension. Just how high is “sufficient” is still a matter of debate – it may be above four dimensions, above six dimensions, or only when the dimension tends to infinity. Nevertheless, the mean-field limit already captures some of the interesting RFIM phenomenology, including the ordered-disordered phase transition. This is what I will discuss in this blog post. I will follow closely the original paper by T. Schneider and E. Pytte from 1977, but I’ve tried to extend it in many places, in order to make the calculations easier to follow.

## 1. The mean-field RFIM Hamiltonian and its free energy

The energy of $N$ fully connected Ising spins $s_i = \pm 1$, $i = 1, \dots, N$, subject to local random fields $h_i$, is:

$$H = -\frac{J}{2N}\sum_{i,j=1}^{N} s_i s_j - \sum_{i=1}^{N} h_i s_i.$$

The prefactor $1/N$ in the first term (the ferromagnetic interaction energy) is chosen so that the scaling of this term with $N$ is the same as the scaling of the second term (containing the random fields).

The corresponding partition sum is the sum over all spin configurations, weighted with their Boltzmann factors:

$$Z[\{h_i\}] = \sum_{\{s_i = \pm 1\}} e^{-\beta H[\{s_i\}]}.$$

As usual, $\beta = 1/(k_B T)$ is the inverse temperature of the system.

In order to see the phase transition, we need to consider the free energy $F$, averaged over all realizations of the random fields $h_i$ (the disorder average is denoted by an overline):

$$F = -\frac{1}{\beta}\,\overline{\ln Z[\{h_i\}]}.$$

Through a somewhat lengthy calculation (which I explain in detail in the last section 3. of this post), one obtains the following expression for the disorder-averaged free energy:

$$\frac{F}{N} = \frac{J m^2}{2} - \frac{1}{\beta}\int\mathrm{d}h\; P(h)\,\ln\left[2\cosh\beta\left(J m + h\right)\right],$$

where the typical magnetization $m$ is fixed through the **implicit (self-consistent) equation**

$$m = \int\mathrm{d}h\; P(h)\,\tanh\beta\left(J m + h\right).$$

Here, $P(h)$ is the probability distribution of the random fields, assumed to be Gaussian with mean zero and variance $\sigma^2$:

$$P(h) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{h^2}{2\sigma^2}}.$$

Although the expression above for the free energy is written in terms of a general $P(h)$, strictly speaking I will show it only for the case where $P(h)$ has this Gaussian form. Whether the formula for the free energy also holds for other $P(h)$ is a good question which I’d like to understand better – if any expert happens to be reading this and can clarify, it would be great!
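As a quick numerical illustration, the self-consistent equation for $m$ can be solved by fixed-point iteration. Below is a minimal Python sketch (my own code, not from the paper; all function names are my own choices). The Gaussian average is approximated by a simple midpoint rule on a truncated range:

```python
import math

def gaussian_average(f, sigma, n=2001, cutoff=8.0):
    """Average f(h) over a centered Gaussian of standard deviation sigma,
    using a midpoint rule on the truncated range [-cutoff*sigma, cutoff*sigma]."""
    a = -cutoff * sigma
    dh = 2.0 * cutoff * sigma / n
    norm = 1.0 / math.sqrt(2.0 * math.pi * sigma * sigma)
    total = 0.0
    for k in range(n):
        h = a + (k + 0.5) * dh
        total += norm * math.exp(-h * h / (2.0 * sigma * sigma)) * f(h) * dh
    return total

def magnetization(beta, J, sigma, m0=0.9, tol=1e-12, max_iter=10000):
    """Fixed-point iteration of m = <tanh(beta (J m + h))>, averaged over P(h)."""
    m = m0
    for _ in range(max_iter):
        m_new = gaussian_average(lambda h: math.tanh(beta * (J * m + h)), sigma)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Weak disorder at low temperature: strong ferromagnetic order, m close to 1.
print(magnetization(beta=10.0, J=1.0, sigma=0.1))
# High temperature: the iteration collapses onto the trivial solution m = 0.
print(magnetization(beta=0.5, J=1.0, sigma=1.0))
```

Starting from a non-zero initial value, the iteration converges to the ordered solution whenever one exists, and to $m = 0$ otherwise.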

Now, let’s look at what information on the phase diagram of the model we can deduce from these expressions.

## 2. Phase diagram of the mean-field RFIM

The self-consistent equation for the magnetization given above has the trivial solution $m = 0$ for all values of the parameters $\beta, J, \sigma$. Similar to what happens in the mean-field solution for the standard Ising model, if $\beta J$ is sufficiently large, the slope of the right-hand side at $m = 0$ is larger than one and there is another non-trivial solution $m \neq 0$ (see figure on the right). This indicates the existence of ferromagnetic order. Thus, the

**phase boundary between the paramagnetic and the ferromagnetic phase** is given precisely by the condition

$$\beta J\int\mathrm{d}h\; P(h)\,\frac{1}{\cosh^2(\beta h)} = 1.$$

One of the three parameters $\beta, J, \sigma$ can be eliminated by rescaling the fields $h$. For example, we can eliminate $J$ and express the phase transition line in terms of the dimensionless variables $\tau := 1/(\beta J)$, $s := \sigma/J$, $y := h/J$:

$$\frac{1}{\tau}\int\frac{\mathrm{d}y}{\sqrt{2\pi s^2}}\; e^{-\frac{y^2}{2 s^2}}\;\frac{1}{\cosh^2(y/\tau)} = 1.$$

Another way is to scale away $\sigma$, setting $\hat{\beta} := \beta\sigma$ and $\hat{J} := J/\sigma$.

Then one has the following expression for the phase boundary (with $x = h/\sigma$):

$$\hat{\beta}\,\hat{J}\int\frac{\mathrm{d}x}{\sqrt{2\pi}}\; e^{-\frac{x^2}{2}}\;\frac{1}{\cosh^2(\hat{\beta}\, x)} = 1.$$

With this expression, we see more clearly that there is a well-defined zero-temperature limit. When $T \to 0$, i.e. $\beta \to \infty$, we can use $\beta/\cosh^2(\beta h) \to 2\,\delta(h)$ in the phase boundary condition, which gives $2 J P(0) = 1$, i.e. a transition at

$$\frac{\sigma_c}{J} = \sqrt{\frac{2}{\pi}} \approx 0.8.$$

This is quite interesting, since it indicates a phase transition driven purely by the fluctuations of the quenched disorder, without any thermal noise!
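This zero-temperature value is easy to verify numerically: evaluate the left-hand side of the phase boundary condition at a large but finite $\beta$, and locate the critical $\sigma$ by bisection. Here is a minimal Python sketch (my own code; function names are my own choices). The integral is rewritten in the variable $x = \beta h$ so that the narrow $1/\cosh^2$ factor stays well resolved at low temperature:

```python
import math

def boundary_lhs(beta, J, sigma, n=4001, cutoff=20.0):
    """Left-hand side of the phase boundary condition,
    beta * J * integral dh P(h) / cosh(beta h)^2,
    computed in the variable x = beta h."""
    dx = 2.0 * cutoff / n
    total = 0.0
    for k in range(n):
        x = -cutoff + (k + 0.5) * dx
        h = x / beta
        p = math.exp(-h * h / (2.0 * sigma * sigma)) / math.sqrt(2.0 * math.pi * sigma * sigma)
        total += p / math.cosh(x) ** 2 * dx / beta
    return beta * J * total

def critical_sigma(beta, J, lo=0.05, hi=3.0, iters=60):
    """Bisection for the disorder strength at which boundary_lhs crosses 1
    (assumes the transition lies between lo and hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if boundary_lhs(beta, J, mid) > 1.0:
            lo = mid   # still ferromagnetic: increase the disorder
        else:
            hi = mid
    return 0.5 * (lo + hi)

# At low temperature the critical disorder approaches sqrt(2/pi) ~ 0.798 (for J = 1).
print(critical_sigma(beta=50.0, J=1.0))
```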

More generally, we can easily plot the phase diagram in terms of $k_B T/J$ and $\sigma/J$ using `Mathematica`; see the figure on the left.

Of course, the expression for the free energy given above can also be used to deduce more elaborate observables, like the susceptibility, etc. For this, I refer to the original paper by T. Schneider and E. Pytte. For example, one can find that near the phase transition the magnetization vanishes with a power law,

$$m \propto \left(T_c - T\right)^{1/2}.$$

This is the typical **mean-field power law**, which is also found when studying the phase transition in the pure Ising model on the mean-field level.
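This exponent can be checked numerically with the self-consistent equation: if $m \propto (T_c - T)^{1/2}$, then doubling the distance to $T_c$ should increase $m$ by a factor of $\sqrt{2}$. A self-contained Python sketch (my own code; the parameter values are arbitrary test choices, not from the paper):

```python
import math

def gauss_avg(f, sigma, n=1501, cutoff=7.0):
    # midpoint-rule average of f(h) over a centered Gaussian of std sigma
    a = -cutoff * sigma
    dh = 2.0 * cutoff * sigma / n
    norm = 1.0 / math.sqrt(2.0 * math.pi * sigma * sigma)
    total = 0.0
    for k in range(n):
        h = a + (k + 0.5) * dh
        total += norm * math.exp(-h * h / (2.0 * sigma * sigma)) * f(h) * dh
    return total

def t_critical(J, sigma, lo=0.2, hi=5.0, iters=50):
    # bisection on the phase boundary condition beta J <sech^2(beta h)> = 1;
    # assumes sigma is below the zero-temperature critical disorder
    for _ in range(iters):
        T = 0.5 * (lo + hi)
        b = 1.0 / T
        lhs = b * J * gauss_avg(lambda h: 1.0 / math.cosh(b * h) ** 2, sigma)
        if lhs > 1.0:
            lo = T   # still ordered: the true Tc is higher
        else:
            hi = T
    return 0.5 * (lo + hi)

def solve_m(T, J, sigma, m=0.5, iters=30000, tol=1e-11):
    # fixed-point iteration of the self-consistent equation at temperature T
    b = 1.0 / T
    for _ in range(iters):
        m_new = gauss_avg(lambda h: math.tanh(b * (J * m + h)), sigma)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

J, sigma = 1.0, 0.3                 # arbitrary test point with moderate disorder
Tc = t_critical(J, sigma)
m1 = solve_m(0.92 * Tc, J, sigma)   # distance 0.08 Tc below the transition
m2 = solve_m(0.96 * Tc, J, sigma)   # distance 0.04 Tc below the transition
print(m1 / m2)                      # close to sqrt(2) if m ~ (Tc - T)^(1/2)
```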

Now that we’re convinced that these formulas for the free energy and the magnetization are useful for understanding the physics of the mean-field RFIM, I will explain in detail how they were derived.

## 3. Calculating the free energy using the replica trick

Since averages over the logarithm of a random variable are not easy to compute, we use the *replica trick* to re-express the free energy in terms of moments $\overline{Z^n}$ of $Z$:

$$\overline{\ln Z} = \lim_{n\to 0}\frac{\overline{Z^n} - 1}{n} = \lim_{n\to 0}\,\partial_n\,\overline{Z^n}.$$

$\overline{Z^n}$ is the partition sum of $n$ copies, or *replicas*, of the system, feeling the same random fields $h_i$. Indexing these copies by greek indices $a = 1, \dots, n$, we get

$$\overline{Z^n} = \sum_{\{s_i^a\}}\overline{\exp\left[\frac{\beta J}{2N}\sum_{a=1}^{n}\left(\sum_{i=1}^{N} s_i^a\right)^2 + \beta\sum_{i=1}^{N} h_i\sum_{a=1}^{n} s_i^a\right]}.$$

Since the $h_i$ are independent and Gaussian (with mean 0 and variance $\sigma^2$), $\overline{e^{\beta h_i\sum_a s_i^a}} = e^{\frac{\beta^2\sigma^2}{2}\left(\sum_a s_i^a\right)^2}$. Then

$$\overline{Z^n} = \sum_{\{s_i^a\}}\exp\left[\frac{\beta J}{2N}\sum_{a}\left(\sum_{i} s_i^a\right)^2 + \frac{\beta^2\sigma^2}{2}\sum_{i}\left(\sum_{a} s_i^a\right)^2\right].$$
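The only property of the disorder used in this step is the Gaussian identity $\overline{e^{\lambda h}} = e^{\lambda^2\sigma^2/2}$, applied with $\lambda = \beta\sum_a s_i^a$ at each site. Just to make the step concrete, here is a quick Monte-Carlo check of the identity in Python (my own snippet; the values of $\lambda$ and $\sigma$ are arbitrary):

```python
import math
import random

random.seed(42)

sigma, lam = 0.5, 1.3     # arbitrary test values for the field width and lambda
n_samples = 200000

# sample average of exp(lam * h) over Gaussian random fields h
mc = sum(math.exp(lam * random.gauss(0.0, sigma))
         for _ in range(n_samples)) / n_samples
exact = math.exp(lam ** 2 * sigma ** 2 / 2.0)

print(mc, exact)   # the two values should agree to a fraction of a percent
```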

We have performed the disorder averaging, so what remains is the partition sum of $n$ copies/replicas of a *non-disordered* fully-connected Ising model. They are interacting via the second term in the exponential, which came from the disorder averaging. We will now compute the partition sum of this *replicated Hamiltonian*. The plan for this is the following:

- Decouple the individual sites using a Hubbard-Stratonovich transformation.
- Apply the saddle-point method to compute the integral. Due to the decoupling, the saddle point is controlled by the number of sites $N$. Thus, this procedure is exact in the thermodynamic limit $N \to \infty$.

To apply the Hubbard-Stratonovich transformation, we first rewrite the ferromagnetic interaction as

$$\frac{\beta J}{2N}\sum_{i,j}\sum_{a} s_i^a s_j^a = \frac{\beta J}{2N}\sum_{a}\left(\sum_{i} s_i^a\right)^2,$$

and then use the Gaussian integral identity

$$e^{\frac{\lambda}{2}x^2} = \frac{1}{\sqrt{2\pi\lambda}}\int_{-\infty}^{\infty}\mathrm{d}y\; e^{-\frac{y^2}{2\lambda}+x y},$$

applied with $\lambda = \beta J/N$ and $x = \sum_i s_i^a$, once for each replica $a$.

In order to have an exponent scaling homogeneously with the system size $N$, we set $y^a = \beta J\, m^a$:

$$\overline{Z^n} = \left(\frac{N\beta J}{2\pi}\right)^{\frac{n}{2}}\int\prod_{a=1}^{n}\mathrm{d}m^a\; e^{-\frac{N\beta J}{2}\sum_a (m^a)^2}\,\sum_{\{s_i^a\}}\exp\left[\beta J\sum_{i,a} m^a s_i^a + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_a s_i^a\right)^2\right].$$

The exponent is now a linear sum over all sites $i$, without any interaction between them. Thus, the spin sum factorizes into the $N$-th power of the partition sum of a single site:

$$\overline{Z^n} = \left(\frac{N\beta J}{2\pi}\right)^{\frac{n}{2}}\int\prod_{a=1}^{n}\mathrm{d}m^a\; e^{-\frac{N\beta J}{2}\sum_a (m^a)^2}\,\left[z(\{m^a\})\right]^N,$$

where $z(\{m^a\})$ is the partition sum for all replicas on a single site:

$$z(\{m^a\}) = \sum_{\{s^a = \pm 1\}}\exp\left[\beta J\sum_a m^a s^a + \frac{\beta^2\sigma^2}{2}\left(\sum_a s^a\right)^2\right].$$

In the thermodynamic limit $N \to \infty$, the integral in our expression for $\overline{Z^n}$ is dominated by its saddle point (due to the fact that the exponent is linear in $N$). At the saddle point, by symmetry we can assume all the $m^a$ to be equal. Setting $m^a = m$ for all $a$, the exponent of the integrand in the above expression for $\overline{Z^n}$ becomes

$$N\left[-\frac{n\,\beta J}{2}\,m^2 + \ln z(m)\right].$$

Hence, the saddle-point value of $m$ is fixed by the implicit equation

$$n\,\beta J\, m = \partial_m \ln z(m) = \beta J\left\langle\sum_a s^a\right\rangle_{H_1}, \qquad\text{i.e.}\qquad m = \frac{1}{n}\left\langle\sum_a s^a\right\rangle_{H_1},$$

where $\langle\cdot\rangle_{H_1}$ denotes the thermal average of the replicated spins $s^a$ on a single site with respect to the “Hamiltonian”

$$-\beta H_1 = \beta J\, m\sum_a s^a + \frac{\beta^2\sigma^2}{2}\left(\sum_a s^a\right)^2.$$

We see that effectively, the saddle-point location corresponds to a fixed average magnetization per replica, $m$, for the replicated spins on a single lattice site, subject to the “Hamiltonian” $H_1$. Rewriting everything in terms of the magnetization $m$, we get

$$\overline{Z^n} \approx \exp\left\{N\left[-\frac{n\,\beta J}{2}\,m^2 + \ln z(m)\right]\right\},$$

where $m$ is fixed by the implicit (self-consistent) equation

$$m = \frac{1}{n}\,\frac{1}{z(m)}\sum_{\{s^a\}}\left(\sum_a s^a\right)\exp\left[\beta J\, m\sum_a s^a + \frac{\beta^2\sigma^2}{2}\left(\sum_a s^a\right)^2\right].$$

Now, to further simplify the result, we can compute the sums over all spin configurations $\{s^a\}$ by performing another Hubbard-Stratonovich transformation, which linearizes the $\left(\sum_a s^a\right)^2$ term:

$$e^{\frac{\beta^2\sigma^2}{2}\left(\sum_a s^a\right)^2} = \int_{-\infty}^{\infty}\frac{\mathrm{d}h}{\sqrt{2\pi\sigma^2}}\; e^{-\frac{h^2}{2\sigma^2}}\; e^{\beta h\sum_a s^a} = \int\mathrm{d}h\; P(h)\; e^{\beta h\sum_a s^a}.$$

Then, the single-site partition sum can be computed as follows:

$$z(m) = \int\mathrm{d}h\; P(h)\sum_{\{s^a\}} e^{\beta(Jm+h)\sum_a s^a} = \int\mathrm{d}h\; P(h)\left[2\cosh\beta(Jm+h)\right]^n.$$

Similarly, the self-consistent equation for the mean magnetization simplifies to:

$$m = \frac{1}{n\, z(m)}\int\mathrm{d}h\; P(h)\; n\,\tanh\beta(Jm+h)\left[2\cosh\beta(Jm+h)\right]^n = \frac{1}{z(m)}\int\mathrm{d}h\; P(h)\,\tanh\beta(Jm+h)\left[2\cosh\beta(Jm+h)\right]^n.$$

In order to obtain the disorder-averaged free energy, as discussed above we need to take the derivative with respect to $n$ at $n = 0$. Using $z(m) \to 1$ as $n \to 0$, we obtain

$$\frac{F}{N} = -\frac{1}{N\beta}\lim_{n\to 0}\partial_n\,\overline{Z^n} = \frac{J m^2}{2} - \frac{1}{\beta}\int\mathrm{d}h\; P(h)\,\ln\left[2\cosh\beta(Jm+h)\right].$$

The magnetization $m$ is fixed by the self-consistent equation at $n = 0$:

$$m = \int\mathrm{d}h\; P(h)\,\tanh\beta(Jm+h).$$

These are precisely the formulae shown in section 1, which were then used to analyze the phase diagram in section 2.
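As a final consistency check, one can verify numerically that in the ferromagnetic phase the non-trivial solution of the self-consistent equation indeed yields a lower free energy than the paramagnetic solution $m = 0$. A minimal Python sketch (my own code; the parameter values are an arbitrary point well inside the ordered phase):

```python
import math

def gauss_avg(f, sigma, n=2001, cutoff=8.0):
    # midpoint-rule average of f(h) over a centered Gaussian of std sigma
    a = -cutoff * sigma
    dh = 2.0 * cutoff * sigma / n
    norm = 1.0 / math.sqrt(2.0 * math.pi * sigma * sigma)
    total = 0.0
    for k in range(n):
        h = a + (k + 0.5) * dh
        total += norm * math.exp(-h * h / (2.0 * sigma * sigma)) * f(h) * dh
    return total

def free_energy_density(m, beta, J, sigma):
    # F/N = J m^2 / 2 - (1/beta) <ln[2 cosh(beta (J m + h))]>, averaged over P(h)
    avg = gauss_avg(lambda h: math.log(2.0 * math.cosh(beta * (J * m + h))), sigma)
    return J * m * m / 2.0 - avg / beta

def self_consistent_m(beta, J, sigma, m=0.9, iters=5000, tol=1e-12):
    # fixed-point iteration of m = <tanh(beta (J m + h))>, averaged over P(h)
    for _ in range(iters):
        m_new = gauss_avg(lambda h: math.tanh(beta * (J * m + h)), sigma)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

beta, J, sigma = 2.0, 1.0, 0.2   # arbitrary point inside the ferromagnetic phase
m_star = self_consistent_m(beta, J, sigma)

# the non-trivial saddle point should have lower free energy than m = 0
print(m_star)
print(free_energy_density(m_star, beta, J, sigma))
print(free_energy_density(0.0, beta, J, sigma))
```

Since the self-consistent $m$ is exactly the stationary point of the free energy as a function of $m$, the value at $m_\ast$ should also be a local minimum.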

As always, I’m happy about any questions or comments!!

Thanks a lot for your post. It is quite useful.

I think there may be a problem with the implicit determination of $m$ above. The saddle point evaluation for each $m^a$ gives me

$$m^a = \left\langle s^a\right\rangle,$$

which is a factor of $n$ smaller than what you obtained (see also A9 in Schneider and Pytte). If that is correct, then the self-consistent equation for the mean magnetization is also missing a factor of $1/n$ on the second and third lines, but fortunately that term gets canceled by the factor of $n$ that should appear upon taking the derivative with respect to $n$.

If I got it wrong, then I’m really confused with what is going on.

edit: fixed LaTeX tags

Patrick Charbonneau, February 2, 2014 at 10:57 pm

Hi Patrick,

thanks a lot for your interest, and for the detailed reading! I think you are right, there is a factor of $1/n$ missing in my equation for $m$. My idea was to first set $m^a = m$ for all $a$, and then perform the optimization with respect to $m$. But then I should have an extra factor of $n$ on the left-hand side of my self-consistent equation, since the sum over $a$ gives $n$ times the same term. This then gives the same equation as you propose. I’ll recheck it more carefully in the next few days and then update the post.

Of course, with your method of taking the saddle-point equation directly for each of the $m^a$, without setting them to be equal first, the derivation becomes even easier. I’ll add a remark on that above, thank you very much!

Alex

inordinatum, February 3, 2014 at 10:12 pm

I’ve now corrected the factors of $n$ as discussed above… If anybody discovers any more questionable manipulations, please let me know!!

inordinatum, February 12, 2014 at 9:54 pm

Very very useful. Please upload more on the prerequisite mathematics needed to do research in this direction

romie, September 12, 2014 at 12:01 pm

Thanks! For general introductions to this kind of topic, I can recommend the books by M. Kardar (Statistical Physics of Particles and Statistical Physics of Fields) and by A. Altland/B. Simons (Condensed Matter Field Theory). Have fun!

inordinatum, September 14, 2014 at 11:02 am

Thanks for the recommendations. I read the Statistical Physics of Fields by Kardar but found it difficult to understand the content of chapter 5 (perturbative RG). It would be helpful if you could recommend a good book as an introduction to the RG. Thanks in advance.

Romie, November 13, 2014 at 3:56 pm

Indeed starting off with perturbative RG can be a daunting task 🙂

If you’re still interested, I can recommend the book by Cardy (Scaling and Renormalization in Statistical Physics). RG in general is pretty well described in the book by Peskin and Schroeder (Introduction to QFT), though that’s more oriented towards particle physics.

Other than that, I found that trying to solve problems is both useful and necessary for gaining an understanding of that topic 🙂

Have fun!

Alexander

inordinatum, January 18, 2015 at 7:24 pm