Physics and Mathematics of Disordered Systems


Extreme avalanches in the ABBM model


Extreme value statistics is becoming a popular and well-studied field. Just as sums of random variables exhibit the universality described by the central limit theorem, maxima of many independent random variables obey a universal law, the generalized extreme-value distribution. This is interesting for studying e.g. extreme events in finance (see this paper by McNeil and Frey) and climate statistics (see e.g. this physics paper in Science and this climatology paper).

Having refereed a related paper recently, I’d like to share some insights on the statistics of extreme avalanches in disordered systems. An example is a particularly large jump of the fracture front when slowly breaking a disordered solid. A simple but reasonable model for such avalanches is the Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model, originally invented for describing Barkhausen noise. I already touched upon it in a previous blog post, and will focus on it again in the following.
I’ll show an exact formula for the distribution of the maximum avalanche size in an interval, and connect this result to the universal extreme value distribution when considering a large number of avalanches.

Brief review: The ABBM model

To briefly recap (see my previous post for details), the ABBM model consists of a particle at position u(t), pulled through a random force landscape by a spring. A key assumption of the model is that the force resulting from the disordered potential is a Brownian Motion in u. This allows computing many observables exactly.
For example, when the force due to the spring increases by f, it is well-known (see e.g. arxiv:0808.3217 and the citations therein) that the resulting displacement S of the particle follows the distribution

\displaystyle P_f(S) = \frac{f}{2\sqrt{\pi} S^{3/2}}\exp\left[-\frac{(kS-f)^2}{4S}\right].    (1)

Here, k is the spring constant. For simplicity I took a Brownian random force landscape with variance \sigma=1 here, but the results are straightforward to generalize. This result is basically the distribution of the first-passage time of a Brownian motion with drift k at a given level f. In this context it is also known as the Bachelier-Levy formula (see also my post on first-passage times).
For small forces, f \to 0, and weak springs, k \to 0, (1) becomes a power-law distribution P(S) \sim S^{-3/2}.
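The distribution (1) can be sanity-checked numerically; it should be normalized and have mean \left\langle S \right\rangle = f/k (it is in fact an inverse-Gaussian distribution with mean f/k and shape parameter f^2/2). A quick sketch, with arbitrary parameter values:

```python
# Sanity check of eq. (1): P_f(S) integrates to 1 and has mean <S> = f/k.
# (Not part of the original derivation; parameter values are arbitrary.)
import numpy as np
from scipy.integrate import quad

def P_f(S, f, k):
    """Avalanche-size density of eq. (1), for sigma = 1."""
    return f / (2 * np.sqrt(np.pi) * S**1.5) * np.exp(-(k * S - f) ** 2 / (4 * S))

f, k = 2.0, 0.5
norm, _ = quad(lambda S: P_f(S, f, k), 0, np.inf)
mean, _ = quad(lambda S: S * P_f(S, f, k), 0, np.inf)
print(norm, mean)  # ~1.0 and ~f/k = 4.0
```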

The largest avalanche in the ABBM model

Now let us consider particularly large avalanches. When applying a force f, the probability to have a total displacement S \leq S_0 is

\displaystyle F^{tot}_f(S_0) := \int_0^{S_0}\mathrm{d}S\,P_f(S).    (2)

Note, however, that the total displacement in this case is the sum of many avalanches triggered by infinitesimal steps in f. So how do we obtain the probability F_f(S_0) that all avalanches triggered during this ramp-up of the force are smaller than S_0? We decompose the force increase into n intervals of size f/n, and let n \to \infty:

\displaystyle F_f(S_0) = \lim_{n\to\infty} \left[F^{tot}_{f/n}(S_0)\right]^n.    (3)

Note that we use here the Markovian property of the Brownian Motion, which ensures that avalanches on disjoint intervals are independent. Only this property permits us to take the n-th power to obtain the cumulative distribution for the n intervals; on any non-Markovian landscape the avalanches in these intervals would be correlated and things would be much more complicated.

Combining equations (1), (2) and (3) we can compute F_f(S_0) explicitly:

\begin{array}{rl}  F_f(S_0) =& \lim_{n\to\infty} \left[F^{tot}_{f/n}(S_0)\right]^n = \exp\left[-f\partial_f\big|_{f=0}\int_{S_0}^{\infty}\mathrm{d}S\,P_f(S)\right] \\  =& \displaystyle \exp\left[-f\left(\frac{e^{-k^2 S_0/4}}{\sqrt{\pi S_0}} - \frac{k}{2}\text{erfc} \frac{k\sqrt{S_0}}{2}\right)\right].  \end{array}    (4)

This satisfies the normalization expected of a cumulative distribution function: For S_0 \to 0, F_f(S_0)\to 0 and for S_0 \to \infty, F_f(S_0)\to 1.
The probability distribution of the maximal avalanche size S_{max} is correspondingly

\displaystyle P_f(S_{max}) = \partial_{S_0}\big|_{S_0=S_{max}}F_f(S_0).    (5)

Eqs. (4) and (5) give a nice closed-form expression for the size distribution of the largest avalanche during the force increase by f!
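Eq. (4) is easy to test by Monte Carlo, following exactly the construction above: draw the maximum over n independent sub-avalanches triggered by force steps f/n, each distributed according to eq. (1). For \sigma=1, eq. (1) is an inverse-Gaussian (Wald) law with mean f'/k and scale f'^2/2, which numpy can sample directly. A sketch with arbitrary parameter values:

```python
# Monte Carlo check of eq. (4): S_max as the max over n sub-avalanches of
# step f/n, each drawn from eq. (1) (inverse-Gaussian with mean f'/k and
# scale f'^2/2 for sigma = 1). Parameter values are arbitrary.
import numpy as np
from scipy.special import erfc

def F_f(S0, f, k):
    """Cumulative distribution of the maximal avalanche size, eq. (4)."""
    g = np.exp(-k**2 * S0 / 4) / np.sqrt(np.pi * S0) - (k / 2) * erfc(k * np.sqrt(S0) / 2)
    return np.exp(-f * g)

rng = np.random.default_rng(0)
f, k, n, n_samples = 1.0, 1.0, 200, 20_000
df = f / n
S_sub = rng.wald(mean=df / k, scale=df**2 / 2, size=(n_samples, n))
S_max = S_sub.max(axis=1)

for S0 in (0.1, 1.0):
    print(S0, (S_max <= S0).mean(), F_f(S0, f, k))  # should agree to ~1%
```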

From few to many avalanches

As one goes to infinitesimal force steps f \to 0, only a single avalanche is triggered. Then it is clear from our construction and eqs. (4), (5) that P_f(S_{max}) \to P_f(S) as defined in (1). So, as expected, when considering a single avalanche the maximal avalanche size and the total avalanche size coincide.

On the other hand, for large force steps f \to \infty, the ramp-up of the force triggers many independent avalanches. The total displacement S is then the sum of many independent avalanche sizes S_1...S_n. Thus, by the central limit theorem, one expects to find a Gaussian distribution for S. We can see this explicitly from (1): The expectation value is \left\langle S \right\rangle = f/k, and the fluctuations around it are \delta S := S-\left\langle S \right\rangle. From (1) one finds that they scale like \delta S \sim \sqrt{f/k^3}. The normalized fluctuations d := \delta S \sqrt{k^3/f} have the distribution

\displaystyle P(d) = \frac{1}{2\sqrt{\pi}}\exp\left(-\frac{d^2}{4}\right) + \mathcal{O}(f^{-1/2}).     (6)

For large f, we are indeed left with a Gaussian distribution for the normalized fluctuations d. This is easily checked numerically, see figure 1 below.


Figure 1: Distribution of the total displacement S for increasing force steps f, as given in eq. (1). Observe how the distribution converges to the Gaussian limit eq. (6) indicated by black dotted lines.
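The convergence shown in figure 1 is easy to reproduce by direct sampling: for \sigma=1, eq. (1) is an inverse-Gaussian (Wald) law with mean f/k and scale f^2/2, so the normalized fluctuations d can be generated directly. A sketch, with arbitrary parameter values:

```python
# Sampling check of the Gaussian limit, eq. (6): for large f, the normalized
# fluctuation d = (S - f/k) * sqrt(k^3/f) should have mean 0 and variance 2.
# (Eq. (1) is an inverse-Gaussian/Wald law with mean f/k and scale f^2/2.)
import numpy as np

rng = np.random.default_rng(1)
f, k, N = 100.0, 1.0, 100_000
S = rng.wald(mean=f / k, scale=f**2 / 2, size=N)
d = (S - f / k) * np.sqrt(k**3 / f)
print(d.mean(), d.var())  # ~0 and ~2: the Gaussian (6) has variance 2
```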

So what happens with the maximal avalanche size S_{max} for large steps f \to \infty? S_{max} is now the maximum of many independent avalanche sizes S_1...S_n, and as mentioned in the introduction we expect its distribution to be a universal extreme value distribution.
Since only large values of S_0 are relevant in (4), the exponent can be approximated by

\displaystyle \frac{e^{-k^2 S_0/4}}{\sqrt{\pi S_0}} - \frac{k}{2}\text{erfc} \frac{k\sqrt{S_0}}{2} \approx \frac{2}{k^2 \sqrt{\pi} S_0^{3/2}}e^{-k^2 S_0/4}.    (7)
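This follows from the asymptotic expansion \text{erfc}(z) \approx \frac{e^{-z^2}}{z\sqrt{\pi}}\left(1 - \frac{1}{2z^2} + \dots\right) with z = k\sqrt{S_0}/2, which gives the leading coefficient 2/(k^2\sqrt{\pi}S_0^{3/2}); the relative corrections decay like 1/(k^2 S_0). A quick numerical check:

```python
# Check of the large-S0 asymptotics of the exponent in eq. (4), obtained
# from the erfc expansion; the ratio exact/asymptotic tends to 1, with
# relative corrections of order 1/(k^2 S0).
import numpy as np
from scipy.special import erfc

def exact_exponent(S0, k):
    return np.exp(-k**2 * S0 / 4) / np.sqrt(np.pi * S0) - (k / 2) * erfc(k * np.sqrt(S0) / 2)

def asymptotic_exponent(S0, k):
    return 2 / (k**2 * np.sqrt(np.pi) * S0**1.5) * np.exp(-k**2 * S0 / 4)

k = 1.0
for S0 in (50.0, 200.0, 800.0):
    print(S0, exact_exponent(S0, k) / asymptotic_exponent(S0, k))  # -> 1 as S0 grows
```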

Inserting this back into (4), we see that for large f the distribution P_f(S_{max}) is centered around S_f := \frac{4}{k^2} \log f. The cumulative distribution function (4) is well approximated by

\displaystyle F_f(S_{max}) \approx \exp\left[- \frac{2}{k^2 \sqrt{\pi} S_f^{3/2}}e^{-k^2 (S_{max}-S_f)/4} \right].    (8)

This is, up to rescaling, the cumulative distribution function of the Gumbel extreme-value distribution e^{-e^{-x}}. It is easy to check this universal asymptotic form numerically, see figure 2 below. Note that the convergence here is much slower than for the total displacement shown in figure 1, since the typical scale for S_{max} only grows logarithmically with f.


Figure 2: Distribution of the size of the largest avalanche S_{max} for force steps of increasing size f, as given by eq. (5). Observe how the distribution converges to the universal Gumbel limit in eq. (8) indicated by black dotted lines.

For some applications, the limit of a very soft spring, k \to 0, is important. I leave the details to the reader, but the main picture is that the exponential decay for large S_0 in eq. (7) is replaced by a power law S_0^{-1/2}. Correspondingly, the universal extreme-value distribution observed for large force steps f is no longer the Gumbel distribution (8) but instead a Fréchet distribution.

Side note: The minimal avalanche size

One may be tempted to approach similarly the problem of the minimal avalanche size for a slow ramp-up of the applied force. However, this is not well-defined: Due to the roughness of the Brownian force landscape, as we increase the force more and more slowly, the size of the smallest avalanche decreases more and more. Hence, its distribution will always be discretization-dependent and will not yield a finite result such as eq. (4) in the continuum limit.

All this gives a consistent picture of the maximal avalanche in the ABBM model. I find it really nice that it is so simple, knowing the avalanche size distribution (1), to express the distribution of the size of the largest avalanche in closed form and understand how it behaves!

Written by inordinatum

August 19, 2014 at 9:53 pm

Fokker-Planck equation for a jump diffusion process


One of the simplest stochastic processes is a Brownian motion with drift, or a diffusion process:

\displaystyle   \begin{array}{rl}  \dot{x}(t) =& \mu + \sigma\,\xi(t), \\  x(t) =& \mu t + \sigma \,W(t).  \end{array}   (1)

Here, \xi(t) is Gaussian white noise with mean zero and variance \left\langle \xi(t_1)\xi(t_2)\right\rangle = \delta(t_1-t_2). Its integral W(t) = \int_0^t \mathrm{d} t'\,\xi(t') is a Brownian motion.
Continuous Itô stochastic processes such as eq. (1) are insufficient for applications where the random variable x may jump suddenly (such as in avalanches). A natural extension of (1) for modelling this is a so-called jump-diffusion process. Let us suppose that our jump sizes s are positive, independent and identically distributed with density Q(s). Then, the jump diffusion process y(t) is

\displaystyle   y(t) = x(t) + \sum_{i=1}^{N(t)} s_i = \mu t + \sigma\, W(t) + \sum_{i=1}^{N(t)} s_i,   (2)

where s_i, i\in \mathbb{N} are i.i.d. jump sizes as above, and N(t) is the number of jumps encountered up to time t. For simplicity, let us assume that jumps occur independently with rate \lambda, i.e. that the probability to have a jump in a time interval \mathrm{d}t is \lambda\, \mathrm{d}t. Then, N(t) is a Poisson process with rate \lambda.
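A minimal Euler-type simulation illustrates the definition (2). For this sketch I assume exponential jump sizes Q(s) = \alpha e^{-\alpha s} (any positive jump density would do); all parameter values are arbitrary:

```python
# Minimal Euler simulation of the jump-diffusion process, eq. (2).
# Assumption for this sketch: exponential jump sizes Q(s) = alpha*exp(-alpha*s).
# In each time step of length dt, a jump occurs with probability lam*dt.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, lam, alpha = 0.5, 1.0, 3.0, 2.0
T, n_steps, n_paths = 2.0, 2000, 5000
dt = T / n_steps

y = np.zeros(n_paths)
for _ in range(n_steps):
    # drift + diffusion increment
    y += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    # jump with probability lam*dt, size drawn from Q
    jump = rng.random(n_paths) < lam * dt
    y[jump] += rng.exponential(1 / alpha, size=jump.sum())

print(y.mean())  # for these parameters, (mu + lam/alpha)*T = 4
```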

It is well-known that the diffusion process x(t) in (1) is equivalently described by a partial differential equation for the distribution P_t(x) of x, the Fokker-Planck equation (FPE)

\displaystyle \partial_t P_t(x) = \frac{\sigma^2}{2}\partial_x^2 P_t(x) - \mu \partial_x P_t(x).   (3)

This representation is useful e.g. for first-passage problems: they correspond to various boundary conditions imposed on the PDE (3). So how does one generalise the Fokker-Planck equation (3) to the case of the jump-diffusion process y(t) in (2)? I will explain in the following that the answer is

\displaystyle \partial_t P_t(y) = \frac{\sigma^2}{2}\partial_y^2 P_t(y) - \mu \partial_y P_t(y)- \lambda P_t(y) + \lambda\,\int_0^\infty \mathrm{d}s \,Q(s)P_t(y - s) ,   (4)

and then discuss a specific example.

1. Deriving the jump-diffusion FPE

Let us consider a time step from t to t + \mathrm{d}t. The probability for a jump to occur during this interval is \lambda\,\mathrm{d}t, so

\displaystyle P_{t+\mathrm{d}t}(y) = (1-\lambda \mathrm{d}t)\left\langle P_t\left(y-\mu \,\mathrm{d}t - \left[W(t+\mathrm{d}t)-W(t)\right] \right) \right\rangle_W + \lambda \mathrm{d}t\,\left\langle P_t(y-s) \right\rangle_s,   (5)

where \left\langle \cdot \right\rangle_W denotes averaging over all realizations of the Brownian motion W, and \left\langle \cdot \right\rangle_s denotes averaging over the distribution Q(s) of the jump size s. Since the jump term is already multiplied by the jump probability \propto \mathrm{d}t, the drift and noise contributions there are of higher order in \mathrm{d}t and were dropped.

The averaging over the jump size in (5) yields a convolution with the jump size distribution Q(s):

\displaystyle \left\langle P_t(y-s) \right\rangle_s = \int_0^\infty \mathrm{d}s \,Q(s)P_t(y - s).

The average over the noise W in (5) is the same as for standard diffusion. During the interval \mathrm{d}t, the increment of the noise term in (2) is

\displaystyle W(t+\mathrm{d}t)-W(t) = \int_t^{t+\mathrm{d}t}\mathrm{d}t'\,\xi(t') =: V\sqrt{\mathrm{d}t},

where the last equality is a definition for V.
Since W is a Brownian motion, V is normally distributed:

\displaystyle \mathcal{N}(V) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{V^2}{2}\right).

Thus, the average over W in (5) is

\displaystyle  \begin{array}{rl}  \displaystyle \left\langle P_t\left(y-\mu \,\mathrm{d}t - \left[W(t+\mathrm{d}t)-W(t)\right] \right) \right\rangle_W = & \displaystyle \int_{-\infty}^\infty \mathrm{d}V\,\mathcal{N}(V)\, P_t\left(y-\mu \,\mathrm{d}t - V \sqrt{\mathrm{d}t} \right) \\   = & \displaystyle P_t(y) - \mu\, \mathrm{d}t\, \partial_y P_t(y)- \sqrt{\mathrm{d}t} \partial_y P_t(y) \int_{-\infty}^\infty \mathrm{d}V\, \mathcal{N}(V)\, V  \\  & \displaystyle + \frac{1}{2}\mathrm{d}t \partial_y^2 P_t(y)\, \int_{-\infty}^\infty \mathrm{d}V\, \mathcal{N}(V)\, V^2 + \mathcal{O}(\mathrm{d}t)^2 \\  =& \displaystyle P_t(y) - \mu\, \mathrm{d}t\, \partial_y P_t(y) + \frac{1}{2}\mathrm{d}t\,\partial_y^2 P_t(y)  \end{array} .

In the last line we used the fact that the normal distribution has mean zero and variance 1, and dropped all terms of higher order than \mathrm{d}t. Inserting all this into (5), we obtain the jump-diffusion FPE (4).

2. Example: Exponential jumps

Now let us consider a simple example, where the jump sizes are distributed exponentially:

\displaystyle  Q(s) = \alpha\,\exp(-\alpha s),\quad\quad \alpha > 0  .

We will compute the distribution of y(t), starting from y(0)=0.
For this, we solve the jump-diffusion FPE (4) with the initial condition P_{t=0}(y) = \delta(y). Let us take a Fourier transform of (4) from y to k:

\begin{array}{rl}  \displaystyle  \tilde{P}_t(k) := & \displaystyle \int_{-\infty}^\infty \mathrm{d}y\,e^{i k y}\,P_t(y), \\  \displaystyle  \partial_t \tilde{P}_t(k) = & \displaystyle -\frac{\sigma^2}{2}k^2\,\tilde{P}_t(k) + i\mu k\, \tilde{P}_t(k) - \lambda \tilde{P}_t(k) + \lambda\tilde{Q}(k)\tilde{P}_t(k)  \end{array}  .

The different spatial modes decouple, and instead of a partial differential equation we obtain an ordinary differential equation for each value of the Fourier variable k.
For the above exponential form of the jump size distribution Q,

\displaystyle  \tilde{Q}(k) = \frac{\alpha}{\alpha - i k}  .

Furthermore, the initial condition P_{t=0}(y) = \delta(y) gives \tilde{P}_{t=0}(k) = 1. Hence, the solution \tilde{P}_t(k) reads

\displaystyle  \tilde{P}_t(k) = \exp\left[ \left( -\frac{\sigma^2}{2}k^2+  i\mu k + i\frac{\lambda k}{\alpha - i k} \right) t\right]  .

While it does not seem feasible to invert this Fourier transform to obtain a closed expression for P_t(y) (but if anyone has an idea, let me know!), \tilde{P}_t(k) is already enough to determine the moments of y(t). Taking derivatives, we obtain for example

\displaystyle  \begin{array}{rl}  \left\langle y(t) \right\rangle = & -i\partial_k \big|_{k=0} \tilde{P}_t(k) = \left(\frac{\lambda}{\alpha}+\mu\right)t \\  \left\langle y(t)^2 \right\rangle^c = & -\partial_k^2 \big|_{k=0} \tilde{P}_t(k) + \left[\partial_k \big|_{k=0} \tilde{P}_t(k)\right]^2  \\  = & \left(\frac{2\lambda}{\alpha^2}+\sigma^2\right)t, \\  ... &  \end{array}  .
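These moments are easy to check by direct sampling, with no time discretization needed: y(t) is a Gaussian plus a compound-Poisson sum of exponential jumps, and the sum of n i.i.d. exponential variables is Gamma-distributed. A sketch with arbitrary parameter values:

```python
# Exact sampling of y(t) for exponential jumps: Gaussian part plus a
# compound-Poisson part, where the sum of n Exp(alpha) jump sizes is
# Gamma(n, scale=1/alpha). Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, lam, alpha, t = 0.5, 1.0, 3.0, 2.0, 2.0
N = 200_000

n_jumps = rng.poisson(lam * t, size=N)
jump_sums = rng.gamma(np.maximum(n_jumps, 1), 1 / alpha)
jump_sums[n_jumps == 0] = 0.0  # no jumps -> zero contribution

y = mu * t + sigma * np.sqrt(t) * rng.standard_normal(N) + jump_sums

print(y.mean())  # ~ (mu + lam/alpha) * t = 4
print(y.var())   # ~ (sigma^2 + 2*lam/alpha^2) * t = 5
```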

Similarly, solving the FPE (4) with an absorbing boundary allows computing first-passage times (or at least their moments) for our jump-diffusion process.
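As an aside: even though a closed-form inversion of \tilde{P}_t(k) seems out of reach, it can be inverted numerically. A brute-force quadrature sketch (arbitrary parameters, exponential jumps with \tilde{Q}(k) = \alpha/(\alpha - ik) as above):

```python
# Numerical inversion of the Fourier transform P_t(k) by brute-force
# quadrature: P_t(y) = (1/2pi) \int dk e^{-iky} P_t(k).
# Parameter values are arbitrary.
import numpy as np
from scipy.integrate import trapezoid

mu, sigma, lam, alpha, t = 0.0, 1.0, 1.0, 1.0, 1.0
dk = 0.02
k = np.arange(-20.0, 20.0, dk)
P_k = np.exp((-sigma**2 * k**2 / 2 + 1j * mu * k + 1j * lam * k / (alpha - 1j * k)) * t)

y = np.arange(-6.0, 20.0, 0.05)
# the Gaussian factor makes the k-integral converge fast, so a simple
# Riemann sum over the truncated k-grid is accurate
P_y = np.real(np.exp(-1j * np.outer(y, k)) @ P_k) * dk / (2 * np.pi)

norm = trapezoid(P_y, y)
mean = trapezoid(y * P_y, y)
print(norm, mean)  # ~1, and ~(mu + lam/alpha)*t = 1
```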

Have fun, and do let me know if you have any comments!

Written by inordinatum

November 20, 2013 at 8:04 pm

Mean-field solution of the Random Field Ising Model


A happy new year to all the blog readers out there!

Today I will discuss a classical model of a disordered system that is extremely simple to write down: The Ising model with quenched (i.e. time-invariant) local random fields.
This model, also dubbed the Random-Field Ising Model (RFIM), is highly non-trivial. Above two dimensions, depending on the strength of the local random fields, it exhibits a phase transition between an ordered (ferromagnetic) and a disordered (paramagnetic) phase. When applying an additional homogeneous time-varying magnetic field, the local random fields prevent some regions from flipping. This leads to interesting phenomena like avalanches and hysteresis. The avalanche sizes, and the shape of the hysteresis loop, vary depending on the location in the phase diagram. Experimentally, this phenomenology – including the disorder-induced phase transition, avalanches, and hysteresis – describes very well e.g. helium condensation in disordered silica aerogels.

In this blog post, I will discuss the solution of the RFIM in the fully connected limit. This means that the ferromagnetic interaction between any two spins, no matter how far apart, has the same strength. This limit, also called the mean-field limit, is supposed to describe the model accurately for a sufficiently high spatial dimension. Just how high is “sufficient” is still a matter of debate – it may be above four dimensions, above six dimensions, or only when the dimension tends to infinity. Nevertheless, the mean-field limit already captures some of the interesting RFIM phenomenology, including the ordered-disordered phase transition. This is what I will discuss in this blog post. I will follow closely the original paper by T. Schneider and E. Pytte from 1977, but I’ve tried to extend it in many places, in order to make the calculations easier to follow.

1. The mean-field RFIM Hamiltonian and its free energy

The energy of N fully connected Ising spins S_i=\pm 1, i=1...N, subject to local random fields h_i, is:

\displaystyle H[S]=-\frac{J}{N}\sum_{i,j}S_i S_j -\sum_i S_i h_i

The prefactor 1/N in the first term (the ferromagnetic interaction energy) is chosen so that the scaling of this term with N is the same as the scaling of the second term (containing the random fields).

The corresponding partition sum Z is the sum over all spin configurations:

\displaystyle Z=\sum_{\{S_i=\pm 1\}} \,e^{-\beta H[S]}

As usual, \beta = T^{-1} is the inverse temperature of the system.
In order to see the phase transition, we need to consider the free energy F, averaged over all random fields h:

\displaystyle \left\langle F\right\rangle_h = -T\left\langle \ln\, Z \right\rangle_h

Through a somewhat lengthy calculation (which I explain in detail in the last section 3. of this post), one obtains the following expression for the disorder-averaged free energy:

\displaystyle \left\langle F\right\rangle_h = N\left[J m^2 - T \int \mathrm{d} h\, p(h) \ln 2 \cosh \beta \left(2 J m + h\right) \right]

where the typical magnetization m is fixed through the implicit (self-consistent) equation

\displaystyle m = m_{SC}(m):=\int \mathrm{d} h\, p(h) \tanh \beta \left(2 J m + h\right)

p(h) is the probability distribution of the random fields, assumed to be Gaussian with mean zero and variance \sigma^2:

\displaystyle p(h) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{h^2}{2\sigma^2}\right)

Although the expression above for the free energy is written in terms of p(h), strictly speaking I will show it only for the case where p(h) has this Gaussian form. Whether the formula for the free energy also holds for other p(h) is a good question which I’d like to understand better – if any expert happens to be reading this and can clarify, it would be great!

Now, let’s look at what information on the phase diagram of the model we can deduce from these expressions.

2. Phase diagram of the mean-field RFIM


Self-consistent equation for the magnetization of the RFIM. The dashed line indicates the consistency condition m_{SC}(m)=m. The red line is an example of m_{SC}(m) in the ferromagnetic phase (here J=1, T=0.5, \sigma=1). It has a non-trivial value m>0 where it satisfies the self-consistency condition, i.e. intersects the dashed line. On the other hand, the yellow line is an example of m_{SC}(m) in the paramagnetic phase (here J=1, T=0.5, \sigma=2). There, the only point satisfying the self-consistency condition is the trivial solution m=0.

The self-consistent equation for the magnetization m given above has a trivial solution 0=m_{SC}(0) for all values of the parameters \sigma, J, T. Similar to what happens in the mean-field solution for the standard Ising model, if \beta is sufficiently large, the slope m_{SC}'(0) is larger than one and there is another non-trivial solution m>0 (see figure on the right). This indicates the existence of ferromagnetic order. Thus, the phase boundary between the paramagnetic and the ferromagnetic phase is given precisely by the condition

\displaystyle   \begin{array}{rl}  m_{SC}'(m=0) =& 1 \\  \Leftrightarrow 2\beta J \int \mathrm{d} h\, \frac{p(h)}{\left(\cosh \beta h\right)^2} = &1   \end{array}
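The self-consistent equation itself is also easy to solve numerically, e.g. by fixed-point iteration, with the Gaussian average over h done by Gauss-Hermite quadrature. The sketch below uses the parameter values quoted in the figure caption (J=1, T=0.5, \sigma=1 vs. \sigma=2) and reproduces both scenarios:

```python
# Solve m = m_SC(m) by fixed-point iteration. The Gaussian average is done
# with Gauss-Hermite quadrature: for p = N(0, sigma^2),
# integral dh p(h) g(h) = (1/sqrt(pi)) * sum_i w_i g(sqrt(2)*sigma*x_i).
import numpy as np

x, w = np.polynomial.hermite.hermgauss(80)

def m_sc(m, J, beta, sigma):
    h = np.sqrt(2.0) * sigma * x
    return np.dot(w, np.tanh(beta * (2 * J * m + h))) / np.sqrt(np.pi)

def solve_m(J, T, sigma, m0=0.9, n_iter=500):
    beta, m = 1.0 / T, m0
    for _ in range(n_iter):
        m = m_sc(m, J, beta, sigma)
    return m

print(solve_m(1.0, 0.5, 1.0))  # ferromagnetic phase: non-trivial m > 0
print(solve_m(1.0, 0.5, 2.0))  # paramagnetic phase: m = 0
```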

One of the three parameters \beta, J, \sigma can be eliminated by rescaling h. For example, we can eliminate \beta and express the phase transition line in terms of the dimensionless variables \tilde{J}:=\beta J, \tilde{\sigma}:= \beta \sigma, \tilde{h}:=\beta h:

\displaystyle 2 \tilde{J}(\tilde{\sigma}) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi\tilde{\sigma}^2}}}\frac{e^{-\frac{1}{2}\tilde{h}^2/\tilde{\sigma}^2}}{\left(\cosh \tilde{h}\right)^2}\right]^{-1}

Another way is to scale away \sigma, setting \hat{J}:= J/\sigma, \hat{\beta}:=\sigma \beta.

Then one has the following expression for the phase boundary

\displaystyle 2\hat{J}(\hat{\beta}) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi}}}\frac{e^{-\frac{1}{2}\tilde{h}^2/\hat{\beta}^2}}{\left(\cosh \tilde{h}\right)^2}\right]^{-1}

With this expression, we see more clearly that there is a well-defined zero-temperature limit. When \hat{\beta}\to\infty, we get a transition at

\displaystyle 2\hat{J}(\infty) = \left[\int{\frac{\mathrm{d}\tilde{h}}{\sqrt{2\pi}}}\frac{1}{\left(\cosh \tilde{h}\right)^2}\right]^{-1} = \sqrt{\pi/2}

This is quite interesting, since it indicates a phase transition driven purely by the fluctuations of the quenched disorder, without any thermal noise!
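A quick numerical check of this zero-temperature limit, in the rescaled variables \hat{J}, \hat{\beta} (a sketch using scipy; the integration range is truncated since \cosh^{-2} decays rapidly):

```python
# Phase boundary 2*J_hat(beta_hat) = 1 / integral dh/sqrt(2pi)
# exp(-h^2/(2 beta_hat^2)) / cosh(h)^2, which should approach
# sqrt(pi/2) ~ 1.2533 as beta_hat -> infinity (zero temperature).
import numpy as np
from scipy.integrate import quad

def two_J_hat(beta_hat):
    integrand = lambda h: np.exp(-h**2 / (2 * beta_hat**2)) / (np.sqrt(2 * np.pi) * np.cosh(h)**2)
    # sech^2 kills the integrand beyond |h| ~ 20, so (-30, 30) suffices
    val, _ = quad(integrand, -30, 30)
    return 1 / val

for b in (1.0, 5.0, 50.0):
    print(b, two_J_hat(b))
print(np.sqrt(np.pi / 2))  # zero-temperature limit ~ 1.2533
```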

Phase diagram of the mean-field RFIM


More generally, we can easily plot the phase diagram in terms of \hat{J}, \hat{\beta} using Mathematica; see the figure on the left.

Of course, the expression for the free energy given above can also be used to deduce more elaborate observables, like the susceptibility, etc. For this, I refer to the original paper by T. Schneider and E. Pytte. For example, one finds that near the phase transition the magnetization vanishes as a power law,

\displaystyle m(T) \sim (T_c-T)^{1/2}

This is the typical mean-field power law, which is also found when studying the phase transition in the pure Ising model on the mean-field level.

Now that we’re convinced that these formulas for the free energy and the magnetization are useful for understanding the physics of the mean-field RFIM, I will explain in detail how they were derived.

3. Calculating the free energy using the replica trick

Since averages over the logarithm of a random variable are not easy to compute, we use the replica trick to re-express the free energy in terms of moments of Z:

\displaystyle \left\langle F\right\rangle_h = -T \frac{\partial}{\partial n}\Big|_{n=0}\left\langle Z^n \right\rangle_h

Z^n is the partition sum of n copies, or replicas, of the system, all experiencing the same random fields h. Indexing these copies by Greek indices \alpha=1...n, we get

\displaystyle \left\langle Z^n \right\rangle_h = \sum_{\{S_i^\alpha=\pm 1\}} e^{\frac{\beta J}{N} \sum_\alpha \sum_{i, j}S_i^\alpha S_j^\alpha}\left\langle e^{\beta \sum_i \left(\sum_\alpha S_i^\alpha\right) h_i}\right\rangle_h

Since the h_i are independent and Gaussian (with mean 0 and variance \sigma^2), \left\langle e^{\lambda h_i}\right\rangle_h = \int \mathrm{d}h\,p(h)\,e^{\lambda h} = e^{\sigma^2\lambda^2/2}. Then

\displaystyle \left\langle Z^n \right\rangle_h = \sum_{\{S_i^\alpha=\pm 1\}} \exp\left[ \frac{\beta J}{N} \sum_\alpha \sum_{i, j} S_i^\alpha S_j^\alpha + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_\alpha S_i^\alpha\right)^2\right]

We have performed the disorder averaging, so what remains is the partition sum of n copies/replicas of a non-disordered fully-connected Ising model. They are interacting via the second term in the exponential, which came from the disorder averaging. We will now compute the partition sum of this replicated Hamiltonian. The plan for this is the following:

  • Decouple the individual sites i using a Hubbard-Stratonovich transformation.
  • Apply the saddle-point method to compute the integral. Due to the decoupling, the saddle-point is controlled by the number of sites N. Thus, this procedure is exact in the thermodynamic limit N\to\infty.

To apply the Hubbard-Stratonovich transformation, we first rewrite the ferromagnetic interaction as

\displaystyle \sum_\alpha \sum_{i, j} S_i^\alpha S_j^\alpha = \sum_\alpha \left(\sum_i S_i^\alpha \right)^2

and then use

\displaystyle \exp\left[\frac{\beta J}{N} \sum_\alpha \left(\sum_i S_i^\alpha \right)^2\right] = \frac{1}{(2\pi)^{n/2}}\int\prod_\alpha\mathrm{d}x_\alpha\,\exp\left[\sum_\alpha \left(x_\alpha \sqrt{2\frac{\beta J}{N}}\sum_i S_i^\alpha - \frac{1}{2}x_\alpha^2\right)\right]

In order for the exponent to scale homogeneously with the system size N, we set x = \sqrt{N} \tilde{x}:

\displaystyle \left\langle Z^n \right\rangle_h = \left(\frac{N}{2\pi}\right)^{n/2}\sum_{\{S_i^\alpha=\pm 1\}} \int\prod_\alpha\mathrm{d}\tilde{x}_\alpha\,\exp\left[-\frac{N}{2}\sum_\alpha\tilde{x}_\alpha^2 + \sqrt{2\beta J}\sum_i \sum_\alpha \tilde{x}_\alpha S_i^\alpha  + \frac{\beta^2\sigma^2}{2}\sum_i\left(\sum_\alpha S_i^\alpha\right)^2\right]

The exponential is now a linear sum over all sites i, without any interaction between them. Thus, the partition sum is the N-th power of the partition sum of a single site:

\displaystyle \left\langle Z^n \right\rangle_h = \left(\frac{N}{2\pi}\right)^{n/2} \int\prod_\alpha\mathrm{d}\tilde{x}_\alpha\,\exp\, N\left[-\frac{1}{2}\sum_\alpha\tilde{x}_\alpha^2 + \ln Z_1(\tilde{x})\right],

where Z_1(\tilde{x}) is the partition sum for all replicas on a single site:

\displaystyle Z_1(\tilde{x}) = \sum_{\{S^\alpha=\pm 1\}} \exp\left[\sqrt{2\beta J}\sum_\alpha \tilde{x}_\alpha S^\alpha  + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2\right]

In the thermodynamic limit N\to\infty, the x integral in our expression for \left\langle Z^n \right\rangle_h is dominated by its saddle point (since the exponent is proportional to N). At the saddle point, by symmetry we can assume all the \tilde{x}_\alpha to be equal. Setting \tilde{x}_\alpha := x for all \alpha, the exponent of the integrand in the above expression for \displaystyle \left\langle Z^n \right\rangle_h becomes

\displaystyle N\left[-\frac{n}{2}x^2 + \ln Z_1(x)\right], \quad \text{where} \quad    Z_1(x) = \sum_{\{S^\alpha=\pm 1\}} \exp\left[\sqrt{2\beta J}x \sum_\alpha S^\alpha  + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2\right].

Hence, the saddle-point value x=x_m is fixed by the implicit equation

\displaystyle \partial_x \big|_{x=x_m}\left[-\frac{n}{2}x^2 + \ln Z_1(x)\right]=0 \Leftrightarrow n\, x_m = \frac{\partial_{x}|_{x=x_m}Z_1(x)}{Z_1(x_m)} = \sqrt{2\beta J}\frac{\sum_{\{S^\alpha=\pm 1\}} \left(\sum_\alpha S^\alpha\right)e^{A[S,x_m]}}{\sum_{\{S^\alpha=\pm 1\}} e^{A[S,x_m]}},


\displaystyle \text{where} \quad A[S,x] = \sqrt{2\beta J}\,x \sum_\alpha S^\alpha  + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2.

We see that effectively, the saddle-point location x_m corresponds to a fixed average magnetization m := x_m / \sqrt{2\beta J}, for the replicated spins S^\alpha on a single lattice site, subject to the “Hamiltonian” A[S,x]. Rewriting everything in terms of the magnetization m, we get

\displaystyle \left\langle Z^n \right\rangle_h \approx \exp\, N\left[-n\beta J m^2 + \ln Z_1(m)\right],

\displaystyle   \begin{array}{rl}  Z_1(m) = & \sum_{\{S^\alpha= \pm 1\}}e^{A[S,m]} \\  A[S,m] = & 2\beta J m\sum_\alpha S^\alpha  + \frac{\beta^2\sigma^2}{2}\left(\sum_\alpha S^\alpha\right)^2 \end{array}

m is fixed by the implicit (self-consistent) equation

\displaystyle n \, m = \frac{1}{Z_1(m)}\sum_{\{S^\alpha= \pm 1\}}\left(\sum_\alpha S^\alpha\right)e^{A[S,m]}

Now, to further simplify the result, we can compute the sums over all spin configurations \{S^\alpha\} by performing another Hubbard-Stratonovich transformation:

e^{A[S,m]} = \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}s^2 + (2\beta J m + \beta \sigma s)\sum_\alpha S^\alpha\right]

Then, the single-site partition sum can be computed as follows:

\displaystyle   \begin{array}{rl}  Z_1(m) = &\sum_{\{S^\alpha= \pm 1\}}e^{A[S,m]} \\  = &\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \prod_{\alpha}\sum_{S^\alpha= \pm 1}e^{(2\beta J m + \beta \sigma s)S^\alpha} \\  = & \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\left[2\cosh(2\beta J m + \beta \sigma s)\right]^n   \\  = &\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta J m + \beta \sigma s)\right]   \end{array}

Similarly, the self-consistent equation for the mean magnetization simplifies to:

\displaystyle   \begin{array}{rl}  n \, m = &\frac{1}{Z_1(m)}\sum_{\{S^\alpha= \pm 1\}}\left(\sum_\alpha S^\alpha\right)e^{A[S,m]} \\  =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \frac{1}{2\beta J }\frac{\partial}{\partial m}\prod_{\alpha}\sum_{S^\alpha= \pm 1}e^{(2\beta Jm + \beta \sigma s)S^\alpha}    \\  =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \frac{1}{2\beta J}\frac{\partial}{\partial m} \left[2\cosh(2\beta Jm + \beta \sigma s)\right]^n   \\  =& \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]\,n\, \tanh (2\beta Jm + \beta \sigma s) \\  \Rightarrow m = & \frac{1}{Z_1(m)}\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}s^2 + n \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]\tanh (2\beta Jm + \beta \sigma s)  \end{array}

In order to obtain the disorder-averaged free energy, as discussed above we need to take the derivative with respect to n at n=0:

\displaystyle   \begin{array}{rl}  \left\langle F\right\rangle_h = &-T \frac{\partial}{\partial n}\Big|_{n=0}\left\langle Z^n \right\rangle_h \\  = & N\left[Jm^2 - T\frac{\partial}{\partial n}\Big|_{n=0} Z_1(m) \right] \\  = & N\left[Jm^2 - T\int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2} \ln 2\cosh(2\beta Jm + \beta \sigma s)\right]   \end{array}

The magnetization m is fixed by the self-consistent equation at n=0:

\displaystyle   m = \int \frac{\mathrm{d} s}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\tanh (2\beta Jm + \beta \sigma s)

By the substitution h := \sigma s, these formulae reduce to those shown in section 1, and then used to analyze the phase diagram in section 2.

As always, I’m happy about any questions or comments!!

Written by inordinatum

January 20, 2013 at 7:16 pm

The Alessandro-Beatrice-Bertotti-Montorsi model


When a magnet is subjected to a slowly varying external magnetic field, its magnetization changes not smoothly, but in discrete jumps. These avalanches can be made audible using an induction coil. The resulting crackling signal is called Barkhausen noise. By analysing various features of this signal, one can deduce information on material properties, for example residual stress or defect sizes, which is important for applications such as non-destructive testing. In this post, I will discuss a simple model describing the physics of Barkhausen noise. I will explain some of its predictions, including the stationary signal distribution, and sizes and durations of the avalanches.

Stochastic differential equation of the ABBM model

Example of signal generated by ABBM model


As you probably know, a ferromagnetic material with zero net magnetization consists of many magnetic domains. Inside one domain, the spins are oriented in parallel (thus each domain has a non-vanishing magnetization), however the magnetizations of different domains are randomly aligned and cancel out on average.
We will be interested in so-called soft magnets. In these materials domain walls can move quite freely, until they encounter a defect; correspondingly, they have a narrow hysteresis loop. The dominant mechanism for magnetization is the motion of domain walls (and not the change of magnetization inside individual domains, as for hard magnets).

Alessandro, Beatrice, Bertotti and Montorsi (for details, see ref. [1]) model the change in magnetization under an external field through the motion of a single domain wall transverse to the magnetic field. They propose the following stochastic differential equation for the domain wall position u(t):

\displaystyle \Gamma \partial_t u(t) = H(t) -k u(t) + F(u(t))

Here, H(t) is the external applied magnetic field, and \Gamma is an effective damping constant which sets the domain-wall relaxation time. The term -ku(t) is the so-called demagnetizing field, which keeps the domain wall from moving indefinitely. F(u) is a random pinning force, which depends on the position u(t) of the wall. In the ABBM model, it is assumed to be a realization of a Brownian motion in u. For more details on the motivation of the individual contributions, see e.g. the review in ref. [2].

Note that the random pinning force is quenched, i.e. depends on the particle position u(t) and not directly on the time t. A time-dependent random force would be a model for thermal noise (instead of localized defects).

Simulating the stochastic differential equation above yields trajectories (see the figure on the right) which are very similar to the results of Barkhausen noise measurements. Due to the specific properties of Brownian motion, the ABBM model is easy to treat analytically. I will now discuss several observables which can be computed exactly: the stationary distribution of domain-wall velocities, and the distributions of avalanche sizes and durations.
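A quick way to get a feeling for the model is to integrate the SDE numerically. Below is a minimal Euler sketch (not the authors' code): the Brownian pinning force is pre-generated on a grid in u, the field is ramped as H(t) = ct, and negative velocities are clipped to zero so that the wall stays pinned instead of moving backward. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quenched Brownian force landscape F(u): <[F(u) - F(u')]^2> = 2*sigma*|u - u'|
sigma = 1.0
du = 1e-3
u_grid = np.arange(0.0, 200.0, du)
F_grid = np.concatenate(([0.0], np.cumsum(
    rng.normal(0.0, np.sqrt(2.0 * sigma * du), size=u_grid.size - 1))))

def F(u):
    # linear interpolation of the quenched pinning force between grid points
    return np.interp(u, u_grid, F_grid)

# Euler integration of  Gamma * du/dt = H(t) - k*u + F(u),  with H(t) = c*t
Gamma, k, c = 1.0, 1.0, 0.5
dt = 1e-3
n_steps = 200_000
u, t = 0.0, 0.0
velocities = np.empty(n_steps)
for i in range(n_steps):
    v = max((c * t - k * u + F(u)) / Gamma, 0.0)  # forward motion only
    u += v * dt
    t += dt
    velocities[i] = v

v_mean = velocities[n_steps // 2:].mean()  # discard the initial transient
print("mean velocity:", v_mean)  # theory: <u_dot> = c/k = 0.5 in the steady state
```

Plotting the `velocities` array reproduces the intermittent, bursty signal shown in the figure.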

Stationary domain wall velocity distribution

To obtain a stationary state of domain wall motion, one ramps up the external field H linearly:
\displaystyle H(t) = c t.
Then, the instantaneous domain wall velocity \dot{u} has a stationary distribution given by
\displaystyle P(\dot{u}) = \frac{e^{-\dot{u}}\dot{u}^{-1+c}}{\Gamma(c)}.
Here I use dimensionless units k = \Gamma = \sigma = 1. One way to derive this result is by solving the Fokker-Planck equation associated to the SDE above (as was done by ABBM in ref. [1]).
This distribution is interesting since it exhibits two different kinds of behaviour: For c < 1, P(\dot{u}) diverges as \dot{u} \to 0, meaning that the domain wall spends most of its time pinned near zero velocity. On the other hand, for c > 1, P(\dot{u}=0)=0 and the motion is smooth.
In the following I will focus on the case c<1, where we have intermittent avalanches.
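The two regimes are easy to see by evaluating the stationary distribution directly; a small sketch in Python (the function name is mine):

```python
import math

def P_stationary(v, c):
    # stationary velocity distribution e^{-v} v^{c-1} / Gamma(c),
    # in dimensionless units k = Gamma = sigma = 1
    return math.exp(-v) * v ** (c - 1.0) / math.gamma(c)

for c in (0.5, 1.5):
    print(c, [round(P_stationary(v, c), 4) for v in (0.01, 0.1, 1.0)])
# c = 0.5: the density grows without bound as v -> 0 (wall pinned most of the time)
# c = 1.5: the density vanishes as v -> 0 (smooth motion)
```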

Avalanche statistics

One way to obtain information on avalanches in the ABBM model is mapping the SDE above to a path integral (the Martin-Siggia-Rose formalism). This is done e.g. in references [3] and [4] below, where the resulting path integral is solved for any external field H(t). Probably the simplest way to define an avalanche is to apply a step-like field, H(t) = w \theta(t). The instantaneous increase at t=0 from H=0 to H=w triggers precisely one avalanche. Its size S and duration T are distributed according to (again in dimensionless units):
\displaystyle P(S) = \frac{w}{\sqrt{4\pi}S^{3/2}}e^{-\frac{(w-S)^2}{4S}},
\displaystyle P(T) = \frac{w}{(2\sinh T/2)^2}e^{-\frac{w}{e^T-1}}.
For small w and small avalanches, these distributions display a large power-law regime where one has
P(S) \sim S^{-3/2}, \quad P(T) \sim T^{-2}.
These power laws indicate that avalanches in the ABBM model are scale-free: There are both extremely small and extremely large ones (between the microscopic cutoff scale given by w and the macroscopic cutoff scale given by k=1).
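Both statements are easy to check numerically. P(S) above is an inverse-Gaussian (Wald) distribution with mean w and shape parameter w^2/2 in these units, so avalanche sizes can be sampled directly; and in the power-law window w \ll T \ll 1, the combination P(T)\,T^2 is approximately the constant w. A minimal sketch (parameter values are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Sample avalanche sizes: P(S) is inverse-Gaussian with mean w and shape w^2/2
w = 0.01
S = rng.wald(mean=w, scale=w**2 / 2, size=1_000_000)
print("empirical <S>:", S.mean())  # theory: <S> = w

# Duration distribution: P(T) * T^2 is roughly constant (= w) for w << T << 1
def P_T(T, w):
    return w / (2.0 * math.sinh(T / 2.0)) ** 2 * math.exp(-w / math.expm1(T))

print([round(P_T(T, 0.001) * T**2, 6) for T in (0.02, 0.05, 0.1)])
```

A log-log histogram of the sampled sizes shows the S^{-3/2} power-law regime between the two cutoffs.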

Universality (or the lack of it)

The exponents of the power-law regimes in P(S) and P(T) above are universal for mean-field models of elastic interfaces. They do not depend on material properties or on details of the dynamics, but only on the fact that one has sufficiently long range interactions between different parts of the interface. These exponents are well-verified experimentally for magnets falling into the mean-field universality class.

Being universal, they also apply to other elastic interfaces with long-range interactions: Some even argue that the P(S) \sim S^{-3/2} behaviour is related to the Gutenberg-Richter distribution of earthquake moments. The avalanche size S would correspond to the earthquake moment, related to its magnitude M (the number reported in newspapers) via M \propto \log_{10} S. The exponent 3/2 for P(S) would give a Gutenberg-Richter b-value of b\approx 0.75, which is not too far off from the observed one.
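For completeness, here is the short calculation behind that b-value, assuming the standard moment-magnitude relation M = \frac{2}{3}\log_{10} S (up to an additive constant). From P(S) \sim S^{-3/2} the cumulative distribution is P(>S) \sim S^{-1/2}; substituting S \propto 10^{\frac{3}{2}M} gives

\displaystyle N(>M) \sim 10^{-\frac{1}{2}\cdot\frac{3}{2}M} = 10^{-0.75\, M},

i.e. a Gutenberg-Richter b-value b = \frac{3}{2}(\tau - 1) = 0.75 for the avalanche size exponent \tau = \frac{3}{2}.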

On the other hand, I find it a little simplistic to look only for universal aspects of completely disparate physical systems. We know, after all, that the motion of magnetic domain walls and the dynamics of earthquakes are not the same thing, so maybe the more interesting physics lies in their differences rather than their similarities.
A more detailed analysis of avalanches thus requires going beyond power-law exponents. Several more refined observables, such as the mean shape of an avalanche, have been proposed to that end. It has been shown (see ref. [5]) that they are sensitive to the details of the dynamics. In my view, the interesting question (which is still not completely answered) is: What features should one look at in order to decide whether a signal is Barkhausen noise or something else? What can one learn about the microscopic disorder in a particular sample by listening to the Barkhausen noise it emits?

Outlook and References

If there is interest, in the future I may extend this blog post to a Wikipedia article, since I believe the model is simple but frequently used. It is still a field of active research, thus the list of references is certainly incomplete. Let me know if you want to add anything!

[1] B. Alessandro, C. Beatrice, G. Bertotti, and A. Montorsi, “Domain-wall dynamics and Barkhausen effect in metallic ferromagnetic materials. I. Theory,” J. Appl. Phys., vol. 68, no. 6, p. 2901, 1990.

[2] F. Colaiori, “Exactly solvable model of avalanches dynamics for Barkhausen crackling noise,” Adv. Phys., vol. 57, no. 4, p. 287, Jul. 2008.

[3] P. Le Doussal and K. J. Wiese, “Distribution of velocities in an avalanche,” Europhys. Lett., vol. 97, no. 4, p. 46004, Apr. 2012.

[4] A. Dobrinevski, P. Le Doussal, and K. J. Wiese, “Nonstationary dynamics of the Alessandro-Beatrice-Bertotti-Montorsi model,” Phys. Rev. E, vol. 85, no. 3, p. 18, Mar. 2012.

[5] S. Zapperi, C. Castellano, F. Colaiori, and G. Durin, “Signature of effective mass in crackling-noise asymmetry,” Nature Physics, vol. 1, no. 1, p. 46, Oct. 2005.

Written by inordinatum

October 11, 2012 at 8:37 pm