In these lectures, we will be interested in systems with large numbers of components. In such a system, a direct calculation of its properties is not possible. Statistical mechanics provides methods for the accurate calculation of the probabilities of finding a system in various conformations. Also known as statistical thermodynamics (Slide 1), it is the framework for deducing macroscopic properties starting from a detailed microscopic theory. The image in Slide 1 shows charged colloidal particles spontaneously forming an ordered phase on increasing the concentration; the scale bars are 10 μm. Statistical mechanics is particularly important in describing soft condensed matter, i.e. systems such as polymer solutions, colloidal suspensions, gels, foams, liquid emulsions, surfactant assemblies and granular materials. These materials, as well as being the basis of various industrially important products, are the building blocks of biological systems. Biological processes make extensive “use” of soft matter physics.

The statistical mechanics framework can be used to explain processes that take place at the molecular and cellular levels in biology. For example, oxygen is bound to hemoglobin in red blood cells when these are in oxygen-rich environments, and it is released only when the oxygen level in the environment falls below some threshold, typically where it is needed in tissues. The binding of certain proteins (called transcription regulator factors) to DNA is at the heart of how a cell decides to produce the proteins coded in its DNA (including more or less of the transcription regulator factors themselves). This equilibrium of many association events, taking place in a liquid environment where thermal noise and Brownian motion are present, can best be described statistically. More generally, statistical mechanics can be used to calculate properties such as the miscibility of two fluids, or the elasticity of rubbers (including a cell’s cytoskeleton), starting from simple assumptions about the geometry and interactions between components.

1.1 Thermodynamic functions and variables

Macroscopic systems can be described in terms of thermodynamic variables. These are quantities such as volume (V), pressure (p), number of particles (N) and temperature (T). With the exception of temperature, lower-case letters are used for "intensive variables" (those that do not depend on the system size), and upper-case (capital) letters are used for "extensive" properties. Energy too is a well-known concept and comes in various guises: heat, mechanical and potential energy, for example.

Thermodynamics is a theory developed before an atomistic picture of matter existed and it does not rely on a microscopic model of matter. Statistical thermodynamics (or statistical mechanics) is the framework to understand the flow of energy between microscopic and macroscopic scales, and between different degrees of freedom in the system. A "degree of freedom" is a parameter describing the state of the system. For example, the positions and velocities of molecules are convenient degrees of freedom to describe a gas.

Two very useful concepts in statistical thermodynamics are "entropy" (S) (Slide 2), which is related to the number of configurations of the system Ω,

S = k_B ln Ω,

Equation 1

(where k_B is the Boltzmann constant), and "chemical potential", μ, which is the change in the energy of the system when one particle is added (with the other controlled variables held fixed).
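To make Equation 1 concrete, the sketch below uses a hypothetical lattice model (not part of the lecture): it counts the configurations Ω of n indistinguishable particles on N sites and evaluates S = k_B ln Ω, illustrating that entropy is extensive.

```python
from math import comb, log

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(n_sites: int, n_particles: int) -> float:
    """S = k_B ln(Omega), with Omega the number of ways to place
    n_particles indistinguishable particles on n_sites lattice sites."""
    omega = comb(n_sites, n_particles)
    return k_B * log(omega)

# Doubling the system (sites and particles together) roughly
# doubles S, because ln(Omega) grows linearly with system size.
s1 = entropy(100, 50)
s2 = entropy(200, 100)
print(s2 / s1)  # close to 2
```

The small deviation from exactly 2 comes from the sub-extensive ln N correction to ln Ω.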

Entropy is related to macroscopic variables via the thermodynamic definition of temperature,

1/T = ∂S/∂E (at constant V and N)

Equation 2

A thermodynamic potential is a function of the thermodynamic variables in the system. There are a few commonly used potentials:

• internal energy, U (S, V, N)
• enthalpy H(S, p, N)
• Helmholtz free energy F(T, V, N) (or sometimes A(T, V, N))
• Gibbs free energy G(T, p, N)

(Note that some texts use F for Helmholtz free energy while some use A.)

Which potential should be used depends on the variables that control the system. In an experiment on a fixed number of molecules, at fixed ambient pressure and controlled temperature, the right potential is the Gibbs free energy. This is very often the case in biology. If instead, for example, a liquid were kept in a sealed container of fixed volume at controlled temperature, then the system would be described by the Helmholtz free energy.

1.1.2 Biological systems

Thermodynamic equilibrium is achieved when the appropriate thermodynamic potential (e.g. Gibbs free energy) is minimised. That is, any change in the variables that control the system results in an increase in the thermodynamic potential. Biological systems are not in thermodynamic equilibrium and they are not isolated: they exchange energy with the environment and often take in, transform and give out molecules. However, much can be learned and described as if the system were in equilibrium, because different processes happen on different timescales. For example, if we are interested in the probability that within a cell a certain protein might bind (or not) to a DNA strand, this will depend on the concentration of free protein, on the interaction with the DNA, and on the fraction of available sites at that time. The process of binding and unbinding finds its equilibrium rapidly, and there is no need to take into account the fact that the DNA will undergo complex and non-equilibrium processes over longer times, when, for example, it is copied to make a new cell. So the fraction of adsorbed protein can be calculated within an equilibrium model.
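As a minimal illustration of such an equilibrium description, the sketch below uses a simple one-site binding model (a Langmuir-type isotherm; the dissociation constant K_d is a purely illustrative value, not a measured one) to compute the fraction of sites occupied as a function of free-protein concentration.

```python
def bound_fraction(c: float, K_d: float) -> float:
    """Equilibrium occupied fraction for one-site binding:
    theta = c / (c + K_d), where c is the free ligand concentration
    and K_d the dissociation constant (both in the same units)."""
    return c / (c + K_d)

K_d = 1e-6  # mol/L, hypothetical value for illustration only
for c in (1e-8, 1e-6, 1e-4):
    print(c, bound_fraction(c, K_d))  # occupancy rises from ~1% to ~99%
```

Even this crude model captures the switch-like behaviour described above: occupancy changes steeply as the free concentration crosses K_d.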

It is not a coincidence that the objects of interest in biology are for the most part in the liquid state, and are formed as mixtures of many components. This has various advantages. For example, the mechanical behaviour, or the phase state, of biological systems can be tuned dramatically by subtle changes in composition or temperature. This is a far richer behaviour compared with that of solid-state single-component materials, which might form, for example, a very stable crystalline phase. Fluids, polymers, membranes and multicomponent mixtures consist of a large number of molecules, and in many cases their function and relevant properties arise from their collective properties. Fluctuations and random events dominate their behaviour at the microscopic level, and evolution has led to systems that make full use of the flexibility offered by these complex fluids. A great deal of information about the properties of complex fluids can be determined from relatively simple theory.

1.2 Canonical ensemble and partition function

In statistical mechanics, a "canonical ensemble" is the name given to a collection of systems in contact with one another so that they can exchange energy. A system that is able to exchange energy with a heat bath is one example, and this is probably the most useful ensemble for classical statistical mechanics in the context of biological physics. If the entire system is composed of a heat bath and a subsystem (Slide 3), where the entire system has energy Y and the subsystem has energy E, then the heat bath has energy (Y − E). If all configurations of the entire system are equally probable, then the "statistical weight", w(E), of the subsystem at energy E is proportional to the number of heat-bath configurations,

w(E) ∝ Ω_bath(Y − E)

Equation 3

and hence, using the definition of entropy (equation 1),

w(E) ∝ exp[S_bath(Y − E)/k_B]

Equation 4

If E is much less than Y, then S_bath(Y − E) can be replaced by its first-order Taylor expansion,

S_bath(Y − E) ≈ S_bath(Y) − E ∂S_bath/∂Y

Equation 5

From classical thermodynamics, we have

∂S_bath/∂Y = 1/T

Equation 2

hence

w(E) ∝ exp(−E/k_B T)

Equation 6

1.2.1 Partition function

The statistical weight presented in Equation 6 is not a normalised probability. The normalising factor is the sum of all of the possible weights,

Z = Σ_i exp(−E_i/k_B T)

Equation 7

This function, Z, is known as the "canonical partition function" and it is a very powerful object, from which many thermodynamic properties of the system can be obtained. A central aim of statistical mechanics is therefore to express Z in terms of the microscopic knowledge of a system.
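As a first example of Equation 7 in use, the sketch below evaluates Z for a hypothetical two-level system (the energy gap eps is an arbitrary choice, comparable to k_B T at room temperature) and uses it to normalise the Boltzmann weights.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def partition_function(energies, T):
    """Canonical partition function Z = sum_i exp(-E_i / (k_B T))
    over a list of microstate energies (in joules)."""
    return sum(math.exp(-E / (k_B * T)) for E in energies)

def probability(E, energies, T):
    """Normalised Boltzmann probability of a microstate of energy E."""
    return math.exp(-E / (k_B * T)) / partition_function(energies, T)

# Hypothetical two-level system: ground state 0, excited state eps.
eps = 4.0e-21   # J; comparable to k_B T near room temperature
T = 300.0       # K
p0 = probability(0.0, [0.0, eps], T)
p1 = probability(eps, [0.0, eps], T)
print(p0, p1)   # the two probabilities sum to 1; the ground state is more likely
```

Dividing each weight by Z is exactly the normalisation step described above.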

Let us see first how the Helmholtz free energy is related to Z. It is often possible to calculate the number of configurations that have a certain energy, and it can be convenient to replace the summation in Equation 7 with a sum over the energies,

Z = Σ_E Ω(E) exp(−E/k_B T) = Σ_E exp[−(E − TS)/k_B T]

Equation 8

where S is the entropy of the subsystem and Ω is the number of configurations in the subsystem with energy E.

Ω = Ω(E, V, T)

Equation 9

For large systems, Ω(E, V, T) is a fast-growing function of E. When multiplied by the exponentially decaying factor exp(−E/k_B T), this leads to a function of E that is sharply peaked at a value ⟨E⟩. The mean energy ⟨E⟩ is identified with the thermodynamic internal energy, U. So, of all the terms in the sum of Equation 8, the term with E = U(T, V, N) dominates, resulting in

Z ≈ exp[−(U − TS)/k_B T]

Equation 10

From Equation 10, it is seen that

F = −k_B T ln Z,    i.e.    Z = exp(−βF)

Equation 11a & 11b

where

β = 1/(k_B T)

Equation 12

Average values of quantities can be calculated if Z is known. For example, the average energy of the subsystem is

⟨E⟩ = (1/Z) Σ_i E_i exp(−βE_i)

Equation 13a

⟨E⟩ = −∂(ln Z)/∂β

Equation 13b
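The two routes to the average energy, a direct Boltzmann average and the derivative of ln Z, can be checked against each other numerically. The sketch below uses an illustrative three-level system with arbitrary energies, in dimensionless units where 1/β plays the role of k_B T.

```python
import math

def ln_Z(beta, energies):
    """Logarithm of the canonical partition function."""
    return math.log(sum(math.exp(-beta * E) for E in energies))

def mean_energy_direct(beta, energies):
    """<E> as the Boltzmann-weighted average of the microstate energies."""
    Z = sum(math.exp(-beta * E) for E in energies)
    return sum(E * math.exp(-beta * E) for E in energies) / Z

def mean_energy_from_lnZ(beta, energies, h=1e-6):
    """<E> = -d(ln Z)/d(beta), estimated by a central finite difference."""
    return -(ln_Z(beta + h, energies) - ln_Z(beta - h, energies)) / (2 * h)

# Hypothetical three-level system, dimensionless units.
energies = [0.0, 1.0, 2.5]
beta = 0.7
print(mean_energy_direct(beta, energies))
print(mean_energy_from_lnZ(beta, energies))  # agrees with the line above
```

Agreement of the two numbers is a useful sanity check whenever Z is computed numerically.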

At a fixed finite temperature, the thermodynamic variables of a system have a well-defined average and they exhibit "fluctuations" (i.e. random small departures from this average). The probability, P, of a fluctuation, Δx, occurring is given by the Boltzmann factor,

P ∝ exp(−ΔF/k_B T)

Equation 14

where ΔF is the change in thermodynamic potential (e.g. the Helmholtz free energy) corresponding to the change, Δx, in the thermodynamic variable (Slide 5). Note that in considering fluctuations around an equilibrium point, where the thermodynamic potential has a minimum, for small Δx the potential will increase quadratically (in proportion to (Δx)²).
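The quadratic cost of a fluctuation makes large excursions exponentially rare. The sketch below assumes ΔF = a(Δx)² (the coefficient a and the reduced units are illustrative choices) and evaluates the relative Boltzmann weight of such a fluctuation.

```python
import math

def relative_prob(dx, a, kBT):
    """Relative probability of a fluctuation dx about equilibrium,
    P ~ exp(-dF / kBT), with an assumed quadratic cost dF = a * dx**2."""
    return math.exp(-a * dx**2 / kBT)

# Reduced units: a = 1, kBT = 1. Doubling the fluctuation size
# quadruples its free-energy cost, so its probability drops sharply.
p_small = relative_prob(1.0, 1.0, 1.0)
p_large = relative_prob(2.0, 1.0, 1.0)
print(p_large / p_small)  # exp(-3), about 0.05
```

This is why thermodynamic variables of a macroscopic system stay so close to their averages: the cost of a fluctuation grows quadratically while its probability falls exponentially.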

The equipartition theorem states that, in equilibrium, any degree of freedom, x, that contributes to the total energy only as a simple quadratic term, ax², where a is a constant, has an average energy of ½k_B T.

For example, the kinetic energy, Ek, in an ideal gas is given by

E_k = (p_x² + p_y² + p_z²)/2m

Equation 15

where m is the molecular mass and p_x, etc. represent the three components of momentum (Slide 5). The expression is quadratic in each of the degrees of freedom (i.e. each momentum component), so the equipartition theorem applies to each component, giving

⟨p_x²/2m⟩ = ½k_B T for each component, and hence ⟨E_k⟩ = (3/2)k_B T

Equation 16
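The equipartition result can be verified numerically for a single quadratic degree of freedom. The sketch below (dimensionless units; the midpoint-rule integration parameters are arbitrary choices) computes ⟨ax²⟩ over the Boltzmann distribution and recovers ½k_B T.

```python
import math

def mean_quadratic_energy(a, kBT, x_max=50.0, n=200_000):
    """<a x^2> for one quadratic degree of freedom, with Boltzmann
    weight exp(-a x^2 / kBT), via midpoint-rule integration over
    [-x_max, x_max]. Equipartition predicts the result kBT / 2."""
    dx = 2.0 * x_max / n
    num = den = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        w = math.exp(-a * x * x / kBT)
        num += a * x * x * w
        den += w
    return num / den

print(mean_quadratic_energy(a=1.0, kBT=1.0))  # close to 0.5
print(mean_quadratic_energy(a=1.0, kBT=2.0))  # close to 1.0
```

Note that the answer depends on kBT but not on the stiffness a, exactly as the theorem states.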