Lecture 1 dealt with equilibrium states, extended to the probability of fluctuations near equilibrium. Biology, however, is necessarily permeated by systems that are out of equilibrium, in which energy is being supplied and transformed. Often this source of energy is comparable to the thermal energy kBT. For example, the hydrolysis of ATP molecules releases around 30 kJ mol–1 of energy, which corresponds to around 12 kBT per molecule. This means that at the molecular scale of a single hydrolysis event, the energy available is larger than the thermal energy, but only by one order of magnitude. Thermal effects still have to be accounted for.
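
As a quick check of these numbers, the conversion from molar energy to kBT per molecule only requires Avogadro's number and the Boltzmann constant. The short Python sketch below is illustrative only; the 30 kJ mol–1 figure and a temperature of 300 K are taken as representative values.

# Rough check: express the ATP hydrolysis energy (~30 kJ/mol) in units of kBT.
# The 30 kJ/mol figure and T = 300 K are representative, illustrative values.
N_A = 6.022e23         # Avogadro's number, 1/mol
k_B = 1.381e-23        # Boltzmann constant, J/K
T = 300.0              # temperature, K

energy_per_molecule = 30.0e3 / N_A        # J per hydrolysis event
print(energy_per_molecule / (k_B * T))    # ~12 kBT per molecule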

There are two well-defined cases of non-equilibrium states: (a) a system that relaxes towards equilibrium; and (b) a system that is constantly driven, and attains a so-called non-equilibrium steady state.

An example of (a) is the diffusion of an ink droplet in water and an example of (b) is convection currents seen in water when it is heated sufficiently from below. A key factor to take into account in non-equilibrium systems is how energy is dissipated, which is linked to the fact that the evolution of a system towards an equilibrium state is generally irreversible.

This lecture begins with diffusion and random walks, and it goes on to consider stochastic motion in a fluid and the Stokes–Einstein relation (Slide 1).


2.1 Diffusion

Diffusion (Slide 2) is an important transport process, occurring extensively in biology, where it may limit the possible rate of nutrient uptake of an organism, or the communication “bandwidth” for signals that rely on the diffusive transport of certain molecules. In 1855, the German physiologist Adolf Fick (1829–1901) proposed a law to describe the process of mass diffusion. He proposed a relation, Fick’s first law, between the flux of material and the concentration gradient. Calling the concentration c, we have in the case of one dimension

J = –D ∂c/∂x

Equation 1a


and more generally (Slide 2),

J = –D ∇c

Equation 1b


where J is the diffusion flux, i.e. the amount of material crossing unit area per unit time (dimensions [amount] [area]–1 [time]–1), and D is the diffusion coefficient (dimensions [length]2 [time]–1).


By considering a mass balance equation (i.e. mass is conserved), which in one dimension is

∂c/∂t = –∂J/∂x

Equation 2


Fick also proposed a second law, which is

∂c/∂t = ∂/∂x (D ∂c/∂x)

Equation 3


In the case where D is a constant (independent of concentration), Equation 3 becomes

∂c/∂t = D ∂²c/∂x²

Equation 4


This describes the time evolution of the concentration field under a diffusive process. Being a differential equation, it can only be solved once the initial and boundary conditions are specified. Note that there is a more general form of the first law (Equation 1) in which the flux is proportional to the gradient of chemical potential rather than of concentration. This matters in many cases: if the flux always ran down the concentration gradient, a phase-separated system would simply diffuse back to a uniform state. In this more general treatment, D is itself a function of concentration, temperature, etc.
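
To make Equation 4 concrete, here is a minimal numerical sketch: an explicit finite-difference integration of the one-dimensional diffusion equation for an initial concentration spike (a crude "ink droplet"). The grid, the value of D and the time step are arbitrary illustrative choices, with the time step chosen to satisfy the usual stability condition DΔt/Δx² ≤ 1/2 for this scheme.

import numpy as np

# Explicit finite-difference integration of dc/dt = D d2c/dx2 (Equation 4).
# Grid size, D and time step are illustrative; dt satisfies D*dt/dx**2 <= 0.5.
D = 1.0e-9              # diffusion coefficient, m^2/s (typical small molecule in water)
L_box = 1.0e-4          # box length, m
nx = 101
dx = L_box / (nx - 1)
dt = 0.4 * dx**2 / D    # stability condition for the explicit scheme

c = np.zeros(nx)
c[nx // 2] = 1.0 / dx   # initial spike carrying unit total amount

for step in range(2000):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2      # zero-flux (reflecting) boundaries
    lap[-1] = (c[-2] - c[-1]) / dx**2
    c = c + D * dt * lap

print("total amount:", c.sum() * dx)    # stays ~1, as required by mass balance (Equation 2)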

Fick’s laws for mass diffusion find application well beyond biological physics, and there are also equations of exactly the same form for heat transfer.

Diffusion may limit and control various processes. In a biological context, it is particularly important to consider the supply of nutrients to organisms, and the motility of small organisms.

2.2 Macromolecules

The diffusion of macromolecules and small particles is a key process in biological materials. Key structures are molecular or, at most, colloidal. For such tiny objects (on the submicron up to the micron scale), the thermal energy, kBT , is effective at keeping the objects in motion, and in randomising their direction. An important question is then how, in such a random environment, there can be well-ordered structures and controlled processes. There are three general strategies:

1. To have a system where the interactions occur with energies much greater than kBT (i.e. a non-thermal system). For example, covalent bonds are not broken by encounters or vibrations with energy of the order kBT. The drawback of such a system is that it is not “soft”, and in order to rearrange it there have to be other mechanisms for breaking these strong bonds.

2. To avoid the “rules” of thermodynamic equilibrium by conditioning the system to be out of equilibrium. The system can be maintained out of equilibrium either (a) on the macroscopic scale (e.g. in an external field such as a temperature gradient, the diffusion of objects has a preferred direction) or (b) on the colloidal scale (e.g. bacteria and molecular motors take in energy by incorporating ATP or other similar molecules, which allows them to avoid reaching thermal equilibrium).

3. To have not one but many particles. Each moves randomly and is subject to unpredictable randomisation by kBT, but the collective statistical behaviour is entirely predictable. This third strategy is applied to the example of Brownian motion.

2.3 Random walks

Scottish botanist Robert Brown (1773–1858) observed in 1828 that objects included within pollen grains (diameter < 1 μm) suspended in water move constantly. He was able to conclude that this motion was not related to life (since he observed it in soot particles as well). The origin of these observations was debated for many decades. Several people proposed the idea that this Brownian motion was caused by collisions between the solvent molecules and the suspended grains, but there was strong opposition to this. It is remarkable that in 1905, Einstein wrote a very important paper on this point, and only then were many convinced – almost a century after Brown’s observations.

Brownian motion is a physical example of a "random walk". A discrete random walk is made of individual steps of length b whose direction is random (Slide 3). Although the motion of individual molecules in a liquid is entirely unpredictable, the average properties of many random walks obey a simple law, as we will demonstrate.


2.3.1 One-dimensional walk

Let us imagine first a one-dimensional case, where the walk is along a line (x-axis). There are numerous applets illustrating one-dimensional random walks, such as this one at the Davidson WebPhysics site,

http://webphysics.davidson.edu/webtalks/clark/onedimensionalwalk.html.

In one dimension, each step can be in either the positive or the negative direction, so the displacement at step j is kjb, with

kj=±1.

We call the position after j steps xj. The walk starts at the origin, so x0 = 0 and we have

xj = xj–1 + kjb.

Equation 5

We can’t know the value of individual positions, xj, but we can make predictions of the average values. For example, since the steps have equal probability of being positive or negative, we have

⟨xj⟩ = 0

Equation 6


where the angle brackets denote an average over many independent realisations of the walk. However, the mean-square displacement, ⟨xj²⟩, is not zero.

Let's calculate what it is,

⟨xj²⟩ = ⟨(xj–1 + kjb)²⟩ = ⟨xj–1²⟩ + 2b⟨xj–1kj⟩ + b²⟨kj²⟩ = ⟨xj–1²⟩ + b²

Equation 7


We can write

⟨xj–1kj⟩ = ⟨xj–1⟩⟨kj⟩ = 0

Equation 8


because for a given xj–1 it is equally probable to have kj = +1 or kj = –1, so the average of these outcomes is zero. In other words, xj–1 and kj are independent random variables, and the average of their product is the product of their averages.

We see from Equation 7 that the mean square displacement of a walk of j steps is the mean square displacement of a walk of j–1 steps, plus b². Therefore, iterating down to x0 = 0, for a walk of N steps,

⟨xN²⟩ = Nb²

Equation 9a


and the root mean square displacement after N steps is

√⟨xN²⟩ = √N b

Equation 9b


If a step of length b is taken every time interval Δt, then the number of steps taken in time t is

N = t/Δt

Equation 10


and Equation 9 can be written as

⟨x²⟩ = 2Dt

Equation 9c


with the diffusion coefficient, D, related to the step length and Δt by

D = b²/(2Δt)

Equation 11


The factor 2 in this definition is a convenient convention because of the relation between the distribution of end-to-end distances (Equation 9) and the solution of the diffusion equation (Equation 4; see Question 1).
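
The prediction ⟨xN²⟩ = Nb² = 2Dt is easy to test by simulation. The sketch below is a minimal Monte Carlo check; the step length b, time step Δt and number of walkers are arbitrary illustrative choices.

import numpy as np

# Monte Carlo check of <x_N^2> = N b^2 = 2 D t (Equations 9a, 9c and 11).
# b, dt and the number of walkers are illustrative choices.
rng = np.random.default_rng(0)
b = 1.0e-8        # step length, m
dt = 1.0e-6       # time per step, s
N = 1000          # number of steps
walkers = 20000   # number of independent walks

x = np.zeros(walkers)
for _ in range(N):
    x += b * rng.choice([-1.0, 1.0], size=walkers)   # each walker takes a +/- b step

D = b**2 / (2.0 * dt)                    # Equation 11
t = N * dt                               # Equation 10
print("simulated <x^2>:", np.mean(x**2))
print("predicted 2 D t:", 2.0 * D * t)   # equals N b^2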

2.3.2 More than one dimension

We can extend the argument above to a random walk in two and three dimensions. For an applet that draws a two-dimensional random walk, see

http://math.furman.edu/~dcs/java/rw.html.

In three dimensions, the displacement is a three-component vector,

r = (x, y, z)

Equation 12


and its mean square length is

⟨r²⟩ = ⟨x²⟩ + ⟨y²⟩ + ⟨z²⟩ = 6Dt

Equation 13


because each term on the right is calculated using Equation 9c. Therefore in general for a random walk in d dimensions,

⟨r²⟩ = 2dDt

Equation 14


So the mean square displacement of many random walks does obey a simple law, as promised earlier.
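
The same numerical experiment extends directly to three dimensions: each Cartesian component performs an independent one-dimensional walk, and the mean square length of the displacement vector should approach 6Dt (Equation 13). The parameters below are again arbitrary illustrative choices.

import numpy as np

# Check of <r^2> = 6 D t for a three-dimensional walk (Equations 13 and 14, d = 3).
# Each component takes independent +/- b steps; parameters are illustrative.
rng = np.random.default_rng(1)
b, dt, N, walkers = 1.0e-8, 1.0e-6, 1000, 20000

r = np.zeros((walkers, 3))
for _ in range(N):
    r += b * rng.choice([-1.0, 1.0], size=(walkers, 3))

D = b**2 / (2.0 * dt)                    # Equation 11, per component
t = N * dt
print("simulated <r^2>:", np.mean(np.sum(r**2, axis=1)))
print("predicted 6 D t:", 6.0 * D * t)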

2.4 Stochastic motion in a fluid

2.4.1 Langevin equation

The net acceleration of a colloidal particle of mass m in a liquid is the result of a viscous drag force and some other applied force (or forces), F. Working in one dimension, we can write the Langevin equation for its motion,

m dv/dt = –ξv + F(t)

Equation 15


For a sphere, the viscous drag coefficient, ξ, is given by Stokes’s equation,

ξ=6πηR.

Equation 16

Note that the drag term being proportional to the velocity can be understood from the fact that a moving particle suffers more collisions with solvent molecules on its front than on its back.

Equation 15 (Slide 4) has the solution


v(t) = v0 exp(–ξt/m) + (1/m) ∫₀ᵗ exp(–ξ(t–t′)/m) F(t′) dt′

Equation 17


Students can be asked (Question 2) to verify that this is a solution by substituting it back into Equation 15 (not entirely trivial). This “solution” is not yet useful in itself, since the stochastic force appears inside the integral and cannot be integrated explicitly. We can, however, build other quantities from Equation 17 that can be evaluated.
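
The substitution asked for in Question 2 can also be checked symbolically. The sketch below uses SymPy with a smooth test force F(t) = sin(ωt), chosen purely so that the integral in Equation 17 can be evaluated explicitly; it is an illustration of the structure of the solution, not a replacement for the pen-and-paper check with a general stochastic F.

import sympy as sp

# Check that Equation 17 solves Equation 15, m dv/dt = -xi v + F(t),
# using the smooth test force F(t) = sin(omega t) so the integral can be done explicitly.
t, s, m, xi, v0, omega = sp.symbols('t s m xi v0 omega', positive=True)

v = v0 * sp.exp(-xi * t / m) \
    + sp.integrate(sp.exp(-xi * (t - s) / m) * sp.sin(omega * s), (s, 0, t)) / m

residual = m * sp.diff(v, t) + xi * v - sp.sin(omega * t)
print(sp.simplify(residual))   # prints 0: Equation 17 satisfies Equation 15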

Aiming to evaluate the mean kinetic energy, it is useful to consider

⟨v(t)²⟩v0 = v0² exp(–2ξt/m) + (2v0/m) exp(–ξt/m) ∫₀ᵗ exp(–ξ(t–t′)/m) ⟨F(t′)⟩v0 dt′ + (1/m²) ∫₀ᵗ∫₀ᵗ exp(–ξ(t–t′)/m) exp(–ξ(t–t″)/m) ⟨F(t′)F(t″)⟩v0 dt′ dt″

Equation 18


Note that the averages here are ensemble means over all possible realisations of the force F, taken at fixed initial velocity v0 (indicated by the subscript on the angle brackets). Since the averaging is over F alone, it can be taken inside the time integrals, which is why only ⟨F⟩ and ⟨F F⟩ appear.

2.4.2 Stochastic force

To proceed further, more information is needed about the stochastic force term that describes the random thermally generated forces acting on a particle. The simplest form for a random noise is “white noise”, that is, completely uncorrelated noise. This (Slide 5) is expressed as

⟨F(t)⟩v0 = 0

Equation 19


⟨F(t)F(t′)⟩v0 = Φ δ(t – t′)

Equation 20


where δ is the Dirac delta function and Φ is a constant to be determined.


Note that no assumption has yet been made about the strength of the force: its mean-square amplitude is simply the constant Φ. Substituting these properties of the random force into Equation 18, the mean square velocity can be written as

⟨v(t)²⟩v0 = v0² exp(–2ξt/m) + (Φ/2mξ)[1 – exp(–2ξt/m)]

Equation 21


From the equipartition of energy, we know that in the limit of large t, the mean square velocity is

⟨v²⟩ = kBT/m

Equation 22


This fixes the constant Φ, the amplitude of the noise correlation, so that

Φ = 2ξkBT

Equation 23


Equation 23 is one form of the fluctuation dissipation theorem: it shows that the amplitude of the fluctuations (the random force) is related to the temperature and to the dissipation (the viscous drag, ξ). This relation has to hold in any system at equilibrium.
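
The fluctuation dissipation relation can be illustrated numerically. The sketch below integrates Equation 15 forward in time for many independent particles, using a discretised white-noise force whose variance per step is fixed by Equation 23, and checks that ⟨v²⟩ relaxes to the equipartition value kBT/m. The particle (a 1 μm radius sphere of density 1000 kg m–3 in water at 300 K) and the time step are illustrative assumptions.

import numpy as np

# Euler integration of the 1D Langevin equation (Equation 15) with a white-noise
# force satisfying <F(t)F(t')> = 2 xi kB T delta(t - t') (Equation 23).
# Discretised over a step dt, each kick is Gaussian with variance 2 xi kB T / dt.
# Particle size, density, viscosity and dt are illustrative assumptions.
rng = np.random.default_rng(2)
kB, T = 1.381e-23, 300.0
eta, R = 1.0e-3, 1.0e-6                 # water viscosity (Pa s), particle radius (m)
xi = 6.0 * np.pi * eta * R              # Stokes drag coefficient (Equation 16)
m = 1.0e3 * (4.0 / 3.0) * np.pi * R**3  # mass for density 1000 kg/m^3

tau = m / xi                            # velocity relaxation time
dt = 0.01 * tau
n_particles, n_steps = 20000, 2000      # run for 20 tau, starting from rest

v = np.zeros(n_particles)
for _ in range(n_steps):
    F = rng.normal(0.0, np.sqrt(2.0 * xi * kB * T / dt), size=n_particles)
    v += (dt / m) * (-xi * v + F)

print("simulated <v^2>:     ", np.mean(v**2))
print("equipartition kB T/m:", kB * T / m)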

2.4.3 Mean square displacement in stochastic motion

With the result from the fluctuation dissipation theorem (Equation 23), the random force is completely characterised. The expression for velocity can be integrated in time to get (Slide 6)

x(t) – x0 = ∫₀ᵗ v(t′) dt′ = (mv0/ξ)[1 – exp(–ξt/m)] + (1/m) ∫₀ᵗ dt′ ∫₀ᵗ′ exp(–ξ(t′–t″)/m) F(t″) dt″

Equation 24


from which, by squaring, averaging over the random force and also averaging the initial velocity over its equilibrium (equipartition) distribution (a few lines of algebra, and a useful exercise), the mean square displacement can be evaluated,

⟨[x(t) – x0]²⟩ = (2kBT/ξ){t – (m/ξ)[1 – exp(–ξt/m)]} ≈ (2kBT/ξ)t   for t ≫ m/ξ

Equation 25



This is a key result: for long times, the mean square displacement grows linearly with time and is independent of mass.
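
A short numerical evaluation of Equation 25 shows the two regimes explicitly: at times much shorter than m/ξ the motion is still ballistic, ⟨Δx²⟩ ≈ (kBT/m)t², while at long times it approaches the diffusive form 2kBTt/ξ. The bead parameters below (a 1 μm radius sphere in water at 300 K) are the same illustrative assumptions as above.

import numpy as np

# Evaluate Equation 25 for an illustrative 1 um radius bead in water and compare
# with the ballistic, (kB T/m) t^2, and diffusive, 2 kB T t / xi, limits.
kB, T = 1.381e-23, 300.0
eta, R = 1.0e-3, 1.0e-6
xi = 6.0 * np.pi * eta * R
m = 1.0e3 * (4.0 / 3.0) * np.pi * R**3
tau = m / xi                            # crossover time m/xi

def msd(t):
    return (2.0 * kB * T / xi) * (t - tau * (1.0 - np.exp(-t / tau)))

for t in (0.01 * tau, 100.0 * tau):
    print(f"t = {t:.2e} s:  Eq 25 = {msd(t):.3e} m^2,",
          f"ballistic = {kB * T / m * t**2:.3e},  diffusive = {2.0 * kB * T * t / xi:.3e}")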

2.4.4 Stokes–Einstein equation

The result of Equation 25 can be written as

⟨[x(t) – x0]²⟩ = 2Dt,  with  D = kBT/ξ

Equation 26


and is important enough to have a name: the Einstein equation.

Equation 26, combined with Stokes’s calculation of drag on a sphere (Equation 16), gives

D = kBT/6πηR

Equation 27


which is called the Stokes–Einstein equation. This is the formula describing the diffusion coefficient of a spherical object in a fluid. It is interesting to consider the time, τR, that it takes for a particle to diffuse over a length scale equal to its radius,

τR = R²/D = 6πηR³/kBT

Equation 28


For the viscosity of water, 1 μm colloidal particles have relaxation timescales in the region of 1 s. Note the strong (cubic) dependence on R.
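
This estimate is straightforward to reproduce. The snippet below evaluates Equations 27 and 28 for a particle of 1 μm diameter (radius 0.5 μm) in water at room temperature; the specific values are illustrative assumptions.

import numpy as np

# Time to diffuse a distance of order the particle radius (Equation 28):
# tau_R = R^2 / D = 6 pi eta R^3 / (kB T). Values are illustrative.
kB, T = 1.381e-23, 300.0
eta = 1.0e-3              # viscosity of water, Pa s
R = 0.5e-6                # radius of a 1 um diameter particle, m

D = kB * T / (6.0 * np.pi * eta * R)   # Stokes-Einstein (Equation 27)
tau_R = R**2 / D                       # Equation 28
print(f"D = {D:.2e} m^2/s, tau_R = {tau_R:.2f} s")   # of order 1 s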