Molecular Simulation/Printable version

From Wikibooks, open books for an open world


Molecular Simulation

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Molecular_Simulation

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

Charge-Charge Interactions

All intramolecular and intermolecular interactions are ultimately the result of electrostatic interactions between charged particles. All molecules are composed of three subatomic particles: protons, neutrons, and electrons. Neutrons carry no charge, while protons and electrons carry charges of equal magnitude but opposite sign. By convention, protons are defined as having positive charges and electrons are defined as having negative charges. The magnitude of these charges is a fixed constant known as the elementary charge, e = 1.602176565(35) × 10^-19 C. A second constant that appears throughout electrostatics is the vacuum permittivity, ε0 = 8.854187817... × 10^-12 F/m (farads per metre).

The potential energy surface of the interaction of two charged particles, calculated using Coulomb's law.

The force between two charged particles due to this electrostatic interaction is,

F = qA·qB/(4πε0·rAB^2)

Coulomb's law (force)

In this equation, rAB is the distance between particles A and B. The charge of a particle is given by the variable q. A charge is a scalar quantity with a sign and a magnitude.

It is often more convenient to discuss intermolecular forces in terms of the potential energy of the interaction. The potential energy of the interaction of two charged particles, labelled A and B, can be determined by integrating the force the particles would experience as they are moved from infinite separation, where the intermolecular interaction is zero, to the distance (rAB) at which they are actually separated,

V(rAB) = qA·qB/(4πε0·rAB)

Coulomb's law (potential energy)

Ionic molecules have charge-charge Coulombic interactions. If the charges have the same sign (e.g., two positive ions), the interaction is repulsive. If the charges have the opposite sign, the interaction is attractive.

Example

What is the potential energy of the electrostatic interaction between an Na+ ion and a Cl- ion separated by 5.00 Å?

Solution: These are both ions at an intermediate distance, so their electrostatic interaction can be approximated using Coulomb's law. The charge-charge distance is given (5.00 Å = 5.00 × 10^-10 m). The charges, qA and qB, are +e for the Na+ cation and -e for the Cl- anion, respectively. The distance and charges must be converted to the SI units of m and C, respectively. Substituting into Coulomb's law gives V = -(1.602 × 10^-19 C)^2/(4πε0 × 5.00 × 10^-10 m) ≈ -4.61 × 10^-19 J per ion pair, or about -278 kJ/mol.
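A short numeric sketch of the arithmetic above (the constants are the ones quoted earlier in this chapter):

```python
import math

E_CHARGE = 1.602176565e-19  # elementary charge, C
EPS0 = 8.854187817e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23         # Avogadro constant, 1/mol

def coulomb_energy(q_a, q_b, r):
    """Coulomb's law (potential energy): V = qA*qB/(4*pi*eps0*r), in J."""
    return q_a * q_b / (4 * math.pi * EPS0 * r)

# Na+ and Cl- separated by 5.00 Angstrom = 5.00e-10 m
v_pair = coulomb_energy(+E_CHARGE, -E_CHARGE, 5.00e-10)
print(v_pair)                  # ~ -4.61e-19 J per ion pair
print(v_pair * N_A / 1000)     # ~ -278 kJ/mol
```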

Further Reading

Wikipedia:Coulomb's law


Charge-Dipole Interactions

A cation (+) has an attractive interaction with the negative pole of a dipolar molecule, but a repulsive interaction with the positive pole of a dipolar molecule.

Charge-Dipole Interactions

Vector geometry can be used to derive an expression for the potential energy of an ion interacting with a dipolar molecule in terms of the distance and angle of orientation between the ion and the center of the dipole.

Charge-dipole interactions occur between an atom with a net formal charge, such as Na+ (qion = +1) or Cl- (qion = -1), and a dipole. A dipole is a vector (μ) that points between two charges of opposite sign (e.g., the partial charges on Na and Cl in NaCl) separated by a distance l.

The dipole moment of a molecule depends on a few factors.

  1. Differences in electronegativity of the atoms in the molecule. The larger the difference, the greater the dipole moment.
  2. The distance between the positive and negative ends of the dipole. A larger distance corresponds to a larger dipole moment.
  3. Multiple polar bonds pointing in the same direction increase the molecular dipole moment, whereas bonds pointing in opposite directions cancel each other out.

For diatomic molecules, the dipole moment can be calculated from the formula μ = q·r, where μ is the dipole moment, q is the magnitude of the atomic partial charges, and r is the distance between them.

Polar bonds pointing in the same direction increase the molecular dipole moment.
Polar bonds pointing in opposite directions cancel each other out.

From the potential energy equation for a charge-dipole interaction,

V(r, θ) = -qion·μ·cosθ/(4πε0·r^2)

you can see that there are 4 factors that affect the potential energy:

  • the distance from the charge (qion) to the dipole (r)
  • the magnitude of the dipole (μ)
  • the magnitude of the charge (qion)
  • the angle of the interaction (θ)

Depending on the angle, the interaction can be repulsive when the charge is close to the like-signed end of the dipole (i.e., +/+), attractive when the charge is close to the oppositely-charged end of the dipole (i.e., +/-), and zero when the positive and negative ends of the dipole are at equal distances from the charge (e.g., perpendicular orientation).

Derivation of Charge-Dipole Interaction Energy

Express the interaction as the sum of the Coulombic interactions between the ion (charge qion) and the two poles of the dipole (charges +qdip and -qdip):

V = qion·qdip/(4πε0) · (1/r+ - 1/r-)

Distance from charges to poles (taking θ as the angle at the dipole centre between the dipole vector and the line to the ion, so that θ = 0 places the negative pole nearest the ion):

r± ≈ r ± (l/2)·cosθ

This approximation is valid if the distance between the charge and dipole (r) is much greater than the length of the dipole (l), making the (l/2)^2 term negligible.

Substitute these lengths into the potential energy equation:

V = qion·qdip/(4πε0) · [1/(r + (l/2)·cosθ) - 1/(r - (l/2)·cosθ)]
  = -qion·qdip/(4πε0) · l·cosθ/(r^2 - (l^2/4)·cos^2θ)
  ≈ -qion·qdip·l·cosθ/(4πε0·r^2)

Identifying the dipole moment as μ = qdip·l gives the charge-dipole interaction energy,

V(r, θ) = -qion·μ·cosθ/(4πε0·r^2)
Example: Charge-Dipole Interaction

What is the interaction energy of an ammonia molecule (μ = 1.42 D) with a Cl- ion that is 2.15 Å away, when the dipole moment of ammonia is aligned with the Cl- ion (θ = 0)?
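A minimal numeric sketch of this example, using the charge-dipole energy expression from this chapter with |q| = e and the sign convention that θ = 0 is the attractive alignment:

```python
import math

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m
DEBYE = 3.33564e-30      # 1 debye in C*m

def charge_dipole_energy(q, mu, r, theta):
    """V(r, theta) = -q*mu*cos(theta)/(4*pi*eps0*r^2), in J."""
    return -q * mu * math.cos(theta) / (4 * math.pi * EPS0 * r**2)

# NH3 dipole (1.42 D) aligned (theta = 0) with an ion of charge magnitude e at 2.15 Angstrom
e = 1.602176565e-19
v = charge_dipole_energy(e, 1.42 * DEBYE, 2.15e-10, 0.0)
print(v)   # ~ -1.48e-19 J (about -89 kJ/mol), an attractive interaction
```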



Dipole-Dipole Interactions

Two molecules that possess permanent dipole moments have a dipole-dipole interaction. Attractive interactions occur between the opposite-sign poles of the molecules (+/-). Repulsive interactions occur between the like-sign poles (+/+ and -/-).

Dipole-dipole interactions occur between two molecules when each of the molecules has a permanent dipole moment. These interactions can be attractive or repulsive depending on the variable orientation of the two dipoles.

A permanent dipole moment occurs when a molecule exhibits uneven charge distribution. This is caused by atoms of different electronegativities being bonded together. The electron density of the molecule will be polarized towards the more electronegative atoms. This excess of negative charge in one part of the molecule and the deficit of charge in another area of the molecule results in the molecule having a permanent dipole moment.

The orientation of the two interacting dipoles can be described by three angles (θ1, θ2, Φ). Angle Φ is the dihedral angle between the two dipoles, which corresponds to the rotation of one dipole about the intermolecular axis relative to the other. It is this angle which accounts for the greatest change in potential energy of the electrostatic interaction. Attractive dipole-dipole interactions occur when oppositely-charged ends of the dipole moments of two molecules are in closer proximity than the like-charged ends. When the opposite- and like-charged ends of the two dipoles are equidistant, there is a net-zero electrostatic interaction. This net-zero interaction occurs when the dihedral angle Φ is at 90°, i.e., the two dipoles are perpendicular to each other.

Two dipoles arranged in an orientation with a net-zero electrostatic interaction because the opposite-signed poles are at the same distance as the like-signed poles.
Dipole-dipole interaction in repulsive-parallel interaction orientation
The interaction between two dipolar molecules (AB and CD) can be described using vector geometry.

Derivation

The potential energy of a dipole-dipole interaction can be derived by applying a variation of Coulomb's law for point charges. By treating the two dipoles as a series of point charges, the interaction can be described by a sum of all charge-charge interactions, where the distance between charges is described using vector geometry.

Similarly to the derivation of a charge-dipole interaction, the derivation for dipole-dipole interactions begins by defining the distance between the centres of the two dipoles as r, and the length of each dipole as l. Using a method similar to the derivation of a charge-dipole interaction, trigonometry can be used to determine the length of each vector corresponding to a charge-charge interaction between the poles: rAC, rAD, rBC, and rBD.

This equation can be simplified so that the potential energy can be determined as a function of four variables:

V(r, θ1, θ2, Φ) = -(μ1·μ2/(4πε0·r^3)) · (2·cosθ1·cosθ2 - sinθ1·sinθ2·cosΦ)

Where μ1 and μ2 are the magnitudes of the two dipole moments, r is the distance between the two dipoles, and θ1, θ2, and Φ describe the orientation of the two dipoles. This potential energy decays at a rate of r^-3.

The ideal attractive potential occurs when the two dipoles are aligned such that Φ = θ1 = θ2 = 0, giving V = -2·μ1·μ2/(4πε0·r^3). The interaction potential energy can also be rotationally averaged to provide the interaction energy thermally averaged over all dipole orientations,

⟨V⟩ = -2·μ1^2·μ2^2/(3·(4πε0)^2·kB·T·r^6)
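The orientation dependence above can be explored numerically. The sketch below evaluates the dipole-dipole energy for two hypothetical 1.85 D dipoles at 3.0 Å in the aligned and perpendicular orientations (the parameter values are illustrative assumptions, not taken from the text):

```python
import math

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m
DEBYE = 3.33564e-30      # 1 debye in C*m

def dipole_dipole_energy(mu1, mu2, r, t1, t2, phi):
    """V = -(mu1*mu2/(4*pi*eps0*r^3)) * (2*cos(t1)*cos(t2) - sin(t1)*sin(t2)*cos(phi))."""
    prefactor = mu1 * mu2 / (4 * math.pi * EPS0 * r**3)
    return -prefactor * (2 * math.cos(t1) * math.cos(t2)
                         - math.sin(t1) * math.sin(t2) * math.cos(phi))

mu = 1.85 * DEBYE        # assumed water-like dipole moment
r = 3.0e-10              # assumed separation, m

v_aligned = dipole_dipole_energy(mu, mu, r, 0.0, 0.0, 0.0)        # head-to-tail
v_perp = dipole_dipole_energy(mu, mu, r, 0.0, math.pi / 2, 0.0)   # perpendicular
print(v_aligned)   # ~ -2.5e-20 J: the ideal attractive orientation
print(v_perp)      # ~ 0: net-zero interaction
```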


Quadrupole-Quadrupole Interactions

The Quadrupolar Phenomenon

Contour plot of the equipotential surfaces of an electric quadrupole field.

A molecular quadrupole moment arises from an uneven distribution of charge inside a molecule. Unlike a molecular dipole moment, quadrupole moments cannot be described using two point charges separated by a distance.

Quadrupoles are the second-order term in the multipole expansion of the electric field of a molecule. The general form of a molecular quadrupole is a rank-two tensor (i.e., a matrix). For linear quadrupoles, we can describe the quadrupolar interaction using only the scalar quadrupole moment, Θ.

In chemistry, some special cases arise when looking at linear molecules such as carbon dioxide (CO2) and carbon disulfide (CS2). Each of these molecules has a quadrupole moment, but they are of opposite sign. This difference can be explained by invoking Pauling electronegativity arguments. Oxygen has a greater electronegativity than carbon, so there is more electron density around the terminal oxygen atoms than in the centre. The opposite is true for carbon disulfide: the sp-hybridized carbon is more electronegative than the terminal sulfur atoms, and so electron density is localized around the centre of the molecule. The physical manifestation of the two different electronic distributions is that carbon dioxide has a quadrupole moment of approximately -1.4×10^-39 C·m^2, while that of carbon disulfide is approximately +1.2×10^-39 C·m^2.

Figure 1. Molecular structures showing the regions of partial charge in a) carbon dioxide and b) carbon disulfide

Calculating Quadrupole-Quadrupole Interactions

Figure 2. CO2 molecules have quadrupole–quadrupole interactions with each other. The central carbon carries a partial positive charge, while the terminal oxygens carry partial negative charges. The like-charge atoms have repulsive electrostatic interactions (+/+ and -/-) while the oppositely-charged atoms have attractive electrostatic interactions (+/-).

The interaction energy of two linear quadrupoles is given by,

V = (3·Θ1·Θ2/(4·(4πε0)·r^5)) · ξ(θ1, θ2, φ)

Linear Quadrupole–Quadrupole Interaction Energy

The quadrupole–quadrupole interaction energy is proportional to the reciprocal of the distance between the two particles (r) to the power of 5. Due to this higher exponent, quadrupole–quadrupole interactions weaken with increasing intermolecular distance at a faster rate than charge-charge or charge-dipole interactions. The quadrupolar interaction energy also depends on the orientation of the two molecules with respect to each other. This is expressed in the orientational term,

ξ(θ1, θ2, φ) = 1 - 5·cos^2θ1 - 5·cos^2θ2 - 15·cos^2θ1·cos^2θ2 + 2·(sinθ1·sinθ2·cosφ - 4·cosθ1·cosθ2)^2

Applications of Quadrupole-Quadrupole Interactions to Molecular Organization

Quadrupole-quadrupole interactions fall under the category of electric phenomena. That is, they are governed by Coulomb's law just like charge-charge and dipole-dipole interactions. Thus, the conventions of 'like repels like' and 'opposites attract' also apply to quadrupole-quadrupole interactions. See Figure 1 for a visualization.

Benzene (C6H6) has a quadrupole moment resulting from its π bonding system. The most stable geometries of a complex composed of two benzene molecules are the 'T-shaped' and 'parallel displaced' arrangements, where the quadrupole-quadrupole attractive interaction is strongest.

Figure 3. Schematics of a) the regions of partial charge in benzene, b) the parallel-displaced attractive orientation of two benzene molecules and c) the T-shaped attractive orientation of two benzene molecules

Hexafluorobenzene (C6F6) is another aromatic molecule which possesses a quadrupole moment, although its sign is opposite to that of benzene. A partial negative charge on the periphery of the ring is created by the strongly electron-withdrawing fluorine atoms, which leave the centre of the ring electron deficient. The favoured alignments of two hexafluorobenzene molecules are still 'T-shaped' and 'parallel displaced'. Because of the complementary differences in electronic distribution, benzene and hexafluorobenzene can also stack in a parallel geometry.

Figure 4. Schematics of a) the regions of partial charge in hexafluorobenzene and b) the interaction orientation of hexafluorobenzene and benzene molecules

Appendix

Complete Equation for Charge-Linear Quadrupole Interaction Energy

Complete Equation for Dipole-Linear Quadrupole Interaction Energy

Complete Equation for Linear Quadrupole–Quadrupole Interaction Energy


Induced Polarization

In isolation, atoms have spherical charge distributions with no dipole moment (top). An applied electric field (ε) pushes electron density toward one side of the atom, creating a depletion of electron density on one side (δ+) and an increase on the other side (δ-). This creates an induced dipole moment.

When a molecule is exposed to an electric field ε, electron density is polarized away from the nuclei. A dipole is thus created due to the asymmetric distribution of electron density. The induced molecular dipole is proportional to the strength of the electric field (ε) as well as the polarizability of the molecule, α:

μinduced = α·ε

Polarizability allows us to better understand the interactions that occur between nonpolar atoms or molecules and other electrically charged species. In general, polarizability reflects how tightly the electrons are held by the nuclei.

Units of Polarizability

Dimensional analysis of induced polarization.

For convenience we will let α' be the volume polarizability:

α' = α/(4πε0)

Polarizability expressed this way has units of volume, which roughly corresponds to the molecular volume. A large polarizability corresponds to a "soft" cloud of electron density, which is easily distorted by an electric field.

Polarizability in Intermolecular Interactions

When we consider the effect that a charge (Q) has on a neutral atom at distance r, an induced dipole is created:

μinduced = α·ε

Where the electrostatic force on the induced dipole is given by the gradient of the field:

F = μinduced·(dε/dr)

The charge-induced dipole electrostatic force can be approximated by the electrostatic force between Q and the near pole of the induced dipole minus the force between Q and the far pole.

Since the electric field at distance r from a point charge Q is:

ε = Q/(4πε0·r^2)

Take the derivative with respect to r:

dε/dr = -2·Q/(4πε0·r^3)

Therefore:

F = μinduced·(dε/dr) = α·ε·(dε/dr) = -2·α·Q^2/((4πε0)^2·r^5)

We can thus get the intermolecular potential by integrating this force over r:

V(r) = -∫ F dr = -α·Q^2/(2·(4πε0)^2·r^4)
A schematic of how (i) two neutral atoms can have an attractive electrostatic interaction when (ii) the electron density of one atom (left) spontaneously polarizes to form an instantaneous dipole, which (iii) polarizes the neighbouring atom.
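The charge-induced dipole energy derived above can be sketched numerically, written here with the volume polarizability α' = α/(4πε0) so that V = -α'·Q^2/(2·(4πε0)·r^4). The argon polarizability and separation used below are illustrative assumptions:

```python
import math

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m

def charge_induced_dipole_energy(alpha_vol, q, r):
    """V = -alpha' * q^2 / (2 * 4*pi*eps0 * r^4), with alpha' a volume polarizability in m^3."""
    return -alpha_vol * q**2 / (2 * 4 * math.pi * EPS0 * r**4)

e = 1.602176565e-19
alpha_ar = 1.64e-30      # assumed Ar volume polarizability, ~1.64 Angstrom^3
v = charge_induced_dipole_energy(alpha_ar, e, 3.0e-10)
print(v)   # ~ -2.3e-20 J; always attractive, decaying as r^-4
```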


Repulsive Interactions

Pauli Exclusion Principle

For a system of two indistinguishable fermions (particles of half-integer spin: protons, electrons, etc.), the total wavefunction is an antisymmetric mixture of independent single-particle states ψa and ψb:

ψ(x1, x2) = (1/√2)·[ψa(x1)·ψb(x2) - ψb(x1)·ψa(x2)] [1], where x is a spatial variable.

Notice that the wavefunction vanishes if ψa = ψb. In the case of an electron, the wavefunction also contains a spinor, χ, which distinguishes between spin-up and spin-down states. For simplicity, assume that both electrons have the same spin. This results in the Pauli Exclusion Principle:

Two electrons of the same spin cannot exist in the same state.

Since there are only two allowable spins (+1/2,-1/2), this guarantees that each orbital contains a maximum of two electrons of opposing spin.

Exchange Force

The apparent force that is observed when electron orbitals overlap originates from the requirement that the wavefunction be antisymmetric under exchange,

ψ(x1, x2) = -ψ(x2, x1).

Electron Density Repulsion

This is commonly known as the exchange force. When the spin state is symmetric (e.g., ↑↑ or ↓↓), the spatial wavefunction must be antisymmetric (antibonding), resulting in a repulsive interaction. This is also the case for filled electron orbitals; helium, with electron configuration 1s^2, does not bond with itself because the interacting electrons would reside in the destabilized antibonding orbital (verified with the MO diagram of He–He).

The Pauli repulsion energy[2] is of the form

Vrep = A·e^(-c·r), where A and c are constants.

Notice that this parallels the radial distribution of electron density around a nucleus (Figure: Electron Density Repulsion). Intuitively, the phenomenon of Pauli repulsion should only exist in the environment of electrons.

Comparison of Pauli Repulsion Formulae

It is of practical advantage to replace the exponential function with a simpler polynomial expression, as in

Vrep = C12/r^12, where C12 is a constant.

A comparison of the polynomial and exponential forms of the potential is illustrated on the right (Figure: Comparison of Pauli Repulsion Formulae). While the functions appear to be very different for small r, the difference is immaterial considering that the probability of being in such a high-energy regime is so low.
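The comparison can be sketched numerically. Below, an r^-12 repulsion is matched in value and slope to an exponential at an arbitrary reference distance; the constants are illustrative assumptions, chosen only to show that the two forms agree near the matching point and differ at shorter range:

```python
import math

C12 = 9.517e6   # assumed repulsion coefficient (arbitrary energy * length^12 units)
r0 = 3.5        # arbitrary matching distance

def v_poly(r):
    """Polynomial form of the Pauli repulsion, V = C12 / r^12."""
    return C12 / r**12

# Match value and slope of A*exp(-c*r) to v_poly at r0: c = 12/r0, A = v_poly(r0)*exp(c*r0)
c = 12.0 / r0
A = v_poly(r0) * math.exp(c * r0)

def v_exp(r):
    """Exponential form of the Pauli repulsion, V = A * exp(-c*r)."""
    return A * math.exp(-c * r)

for r in (3.0, 3.5, 4.0):
    print(r, v_poly(r), v_exp(r))  # both repulsive; identical at r0, diverging below it
```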

Comparison to Attractive Forces


V ∝ 1/r^(n+m+1)

Electrostatic Potential

The potential for electrostatic interactions, such as charge-charge, charge-dipole, dipole-dipole, and quadrupole-quadrupole, always takes on some variation of the form above, where n and m are integers signifying the order of each pole in the multipole expansion[3] (see Table: Range of Electrostatic Interactions).

Table: Range of Electrostatic Interactions

                    Monopole (m=0)   Dipole (m=1)   Quadrupole (m=2)
Monopole (n=0)      1/r              1/r^2          1/r^3
Dipole (n=1)        1/r^2            1/r^3          1/r^4
Quadrupole (n=2)    1/r^3            1/r^4          1/r^5

For the sake of completeness, also consider the charge-induced dipole interactions

Attraction and Repulsion

V = -Q^2·α/(2·(4πε0)^2·r^4),

and the dipole-induced dipole and dispersion interactions

V = -μ^2·α/((4πε0)^2·r^6), V = -(3/4)·I·α^2/((4πε0)^2·r^6), where α is the polarizability of the molecule, I is the ionization potential, and Q is the charge.

For electrostatic interactions following this general form, the potential energy for the interaction between oppositely-charged particles will approach negative infinity as the distance between them approaches zero. This would suggest that if electrostatic interactions were the only forces in our model, oppositely-charged ions would collapse onto each other as r approaches zero. Clearly this is an inaccurate representation of close-range interactions, and some repulsive term must also be considered.

Pauli repulsion is a comparatively short-range interaction, with V ∝ r^-12. The slope of this curve is very large at small internuclear separations. That is to say, the repulsive force rapidly diverges to infinity as the distance between nuclei approaches zero. For this reason it is said that the Pauli repulsive term is much "harder" than the other electrostatic forces.

Van der Waals Model of Fluid (Hard Sphere)

In liquids it is very improbable to have two particles at a highly repulsive distance (i.e., at distances where these functions diverge). Therefore, it is unnecessary to provide a perfect model at these distances, as long as the model is repulsive. It is insightful to consider a fluid as a collection of hard spheres, interacting through only Pauli repulsive forces. Neglecting the attractive forces between particles is justified by the following:

  • Attractive forces are weak and isotropic. For a uniform (constant density) fluid these forces will average to zero.
  • The cohesive contribution from the entropy change of the gas-to-liquid transition is small for a dense gas; there is little structural change between gas and liquid.
  • Repulsive forces are strong enough to prevent any arrangement where atoms overlap.
  • Liquids arrange to minimize volume by maximizing packing of these hard spheres.  

Sources

  1. Griffiths, David J. Introduction to Quantum Mechanics. 2nd ed., Pearson Prentice Hall, 2004.
  2. Stone, A. (2013). The Theory of Intermolecular Forces. 2nd ed. Oxford: Clarendon Press, p. 146.
  3. Stone, A. (2013). The Theory of Intermolecular Forces. 2nd ed. Oxford: Clarendon Press, pp. 43-56.



Hydrogen Bonds

Hydrogen bonding is a type of polar interaction between a hydrogen atom that is bound to an atom of large electronegativity and a hydrogen bond acceptor. Hydrogen bond acceptors fall into two main categories: atoms with a large negative charge, such as Cl- or Br-, and compounds with a polar bond, such as a ketone or carboxylic acid. These bonds are a subclass of dipole-dipole interactions. These interactions are found in many different scenarios: they are present in some organic solvents, solid crystal structures, proteins, and DNA.

Hydrogen Bonding in Liquids

Hydrogen bonds are present in liquids like water, ethanol, hydrogen fluoride, and N-methylacetamide. These hydrogen bonds are partially responsible for their large enthalpies of vaporization, along with boiling points higher than those of hydrocarbons of comparable size. For example, the boiling point of hexane is 68 °C, but the boiling point of hexanol is 135 °C. This is due to the hydrogen bonds formed between the hydroxyl hydrogen atoms of hexanol and the oxygen atoms of other hexanol molecules.

The potential energy surfaces of the dimerization of hydrogen fluoride (black) and chloromethane (blue) calculated using MP2/aug-cc-pVTZ. Although the dipole moments of the two molecules are very similar, HF has a much more negative dimerization energy because the molecules can reach a shorter intermolecular distance before repulsive interactions become dominant.

Hydrogen Bonding in Solids

Crystal structure of the Ih phase of ice

Hydrogen bonds are also present in solids. The bonding in solids is observed to allow for the packing of molecules that gives the solid the maximum number of hydrogen bonds, leading to a stronger crystal. This can be observed in solid water, and is responsible for some of water's unique properties. Each water molecule is capable of donating two hydrogen bonds and accepting two hydrogen bonds.

Strength of Interactions

The magnitude of a hydrogen bond depends on a few different factors. The first is the difference in electronegativity between hydrogen and the atom to which it is attached: the larger the difference in electronegativity, the stronger the interaction will be. The second is the radius of the acceptor, which needs to be small enough to allow the hydrogen to get close enough to interact. Generally the Slater radius should be between 0.65 and 1.00 Å; any larger than this will not allow the hydrogen to come close enough to the atom to interact attractively. The figure on the right illustrates the potential energies of the dimerization of HF and CH3Cl. They both have very similar dipole moments, but due to the hydrogen bonding between the HF molecules a deeper potential energy minimum is obtained, illustrated by the larger well depth. Also in this figure, the well appears at a shorter distance for HF. This is due to the previously mentioned dependence on the radius of the acceptor: F is smaller than Cl, allowing the hydrogen to get closer and interact more strongly. The directionality of the hydrogen bond also affects the strength of the bond; the optimal angle of interaction is 180°, while the optimal distance for interaction is between 1.5 and 2.5 Å. Planar molecules can have very strong hydrogen-bonding capabilities: a flat molecule allows for constructive bond dipoles and avoids steric clashes between the other atoms in the molecule.

Dependence on Distance

The strength of hydrogen bonds dissipates rather quickly with distance; they follow the same trend as other dipole-dipole interactions, which is illustrated by the equation below:

V(r, θ1, θ2, Φ) = -(μ1·μ2/(4πε0·r^3)) · (2·cosθ1·cosθ2 - sinθ1·sinθ2·cosΦ)

Dipole-Dipole Interactions

Where θ1 and θ2 are the two angles of orientation and Φ is the dihedral angle. This equation illustrates that there is an inverse relationship between the strength of the interaction and the cube of the distance between the dipoles.

Effect of Radius


Rotational Averaging

Rotational Averaging

An ion interacts with a polar molecule. Vector geometry can be used to describe the energy of this interaction.

Rotational averaging describes the contribution to the potential energy from the rotational orientation of a charge-dipole interaction. Expectation values are utilized to give a single thermally averaged value for the system's potential energy due to rotation.

For example, take a charged particle and a molecule with a permanent dipole. When they interact, the potential energy of this interaction can easily be calculated. For a dipole of length l, with a radius r between the dipole centre and the charged particle, the energy of interaction can be described by:

V(r, θ) = -q·μ·cosθ/(4πε0·r^2)

Potential Energy of a Charge-Dipole Interaction

where q is the charge of the particle, μ is the dipole moment, θ is the angle between the dipole vector and the line connecting the dipole centre to the particle, ε0 is the vacuum permittivity constant, and r is the radius between the particle and dipole.

Geometrically, this interaction is dependent on the radius r and the length of the dipole, as well as the orientation angle. If the radius between the ion and dipole is taken to be a fixed value, the angle θ still has the ability to change. This varied orientation of θ results in rotation of the dipole about its centre, relative to the interacting charged particle. The weights of the various orientations are described by a Boltzmann-distribution expectation value, described generally by:

⟨V⟩ = Σi Vi·e^(-Vi/(kB·T)) / Σi e^(-Vi/(kB·T))

Expectation value (discrete states)

⟨V⟩ = ∫ V·e^(-V/(kB·T)) dτ / ∫ e^(-V/(kB·T)) dτ

Expectation value (continuous states)

where ⟨V⟩ is the expectation value, Vi is the energy value for a particular configuration, kB is Boltzmann's constant, and T is temperature. This Boltzmann-described weighting is the sum over the energy levels of the system. The probability of a configuration is directly proportional to e^(-Vi/(kB·T)), indicating that at a specific temperature lower-energy configurations are more probable. An equation can then be derived from this general expression, in order to relate it to the geometry and energy of a charge-dipole interaction.
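The discrete form of the expectation value can be sketched directly. The three state energies below are hypothetical, chosen only to show the limiting behaviour of the Boltzmann weighting:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def boltzmann_average(energies, T):
    """Discrete Boltzmann expectation value: <V> = sum(V_i*exp(-V_i/kT)) / sum(exp(-V_i/kT))."""
    weights = [math.exp(-e / (KB * T)) for e in energies]
    return sum(e * w for e, w in zip(energies, weights)) / sum(weights)

states = [0.0, 2.0e-21, 4.0e-21]       # hypothetical configuration energies, J
print(boltzmann_average(states, 10))    # low T: the lowest-energy state dominates
print(boltzmann_average(states, 1e7))   # high T: approaches the unweighted mean
```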

Derivation of Rotationally-Averaged Charge-Dipole Interaction Potential

The orientationally averaged potential energy is the expectation value of the charge-dipole potential energy averaged over the dipole orientation θ.

Starting with the potential energy of a charge-dipole interaction,

V(θ) = -q·μ·cosθ/(4πε0·r^2)

We let

C = q·μ/(4πε0·r^2)

This makes

V(θ) = -C·cosθ

The average over the dipole orientation using the expectation value in classical statistical mechanics is:

⟨V⟩ = ∫0→π V(θ)·e^(-V(θ)/(kB·T))·sinθ dθ / ∫0→π e^(-V(θ)/(kB·T))·sinθ dθ

Note: When integrating over an angle, the variable of integration becomes sinθ dθ.

To solve this integral we must first use a first-order Taylor series approximation, because integrals of exponentials of cosθ do not have analytical solutions.

The first-order Taylor series approximation is as follows:

e^x ≈ 1 + x

Using the Taylor series with x = C·cosθ/(kB·T) gives:

e^(-V(θ)/(kB·T)) = e^(C·cosθ/(kB·T)) ≈ 1 + C·cosθ/(kB·T)

The integral now becomes:

⟨V⟩ = ∫0→π (-C·cosθ)·(1 + C·cosθ/(kB·T))·sinθ dθ / ∫0→π (1 + C·cosθ/(kB·T))·sinθ dθ

Multiplying into the brackets will give:

⟨V⟩ = [-C·∫0→π cosθ·sinθ dθ - (C^2/(kB·T))·∫0→π cos^2θ·sinθ dθ] / [∫0→π sinθ dθ + (C/(kB·T))·∫0→π cosθ·sinθ dθ]

All terms that do not depend on θ are constants and can be factored out of the integrals. The θ-dependent terms can be expressed as 4 integrals:

∫0→π cosθ·sinθ dθ, ∫0→π cos^2θ·sinθ dθ, ∫0→π sinθ dθ, and ∫0→π cosθ·sinθ dθ

We must use trigonometric integrals to solve each of these four integrals:

First: ∫0→π cosθ·sinθ dθ = 0

Second: ∫0→π cos^2θ·sinθ dθ = 2/3

Third: ∫0→π sinθ dθ = 2

Fourth: ∫0→π cosθ·sinθ dθ = 0

Plugging each solved trigonometric integral back into the equation gives:

⟨V⟩ = [0 - (2/3)·C^2/(kB·T)] / [2 + 0] = -C^2/(3·kB·T)

Finally, replace C with q·μ/(4πε0·r^2) to give:

⟨V⟩ = -q^2·μ^2/(3·(4πε0)^2·kB·T·r^4)

The Orientational Average of the Charge-Dipole Interaction Energy
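The linearized result above can be checked against the full Boltzmann average (no Taylor approximation), evaluated numerically. The charge, dipole moment, separation, and temperature below are illustrative assumptions chosen so that C << kB·T, where the first-order expansion is valid:

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
EPS0 = 8.854187817e-12   # vacuum permittivity, F/m

def averaged_numeric(C, T, n=100000):
    """Full Boltzmann average of V(theta) = -C*cos(theta) over orientations (midpoint rule)."""
    num = den = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        v = -C * math.cos(theta)
        w = math.exp(-v / (KB * T)) * math.sin(theta)
        num += v * w
        den += w
    return num / den

q = 1.602176565e-19           # elementary charge, C
mu = 1.85 * 3.33564e-30       # assumed 1.85 D dipole, C*m
r = 40e-10                    # assumed separation, m
T = 300.0
C = q * mu / (4 * math.pi * EPS0 * r**2)

analytic = -C**2 / (3 * KB * T)   # first-order (linearized) result derived above
numeric = averaged_numeric(C, T)
print(numeric, analytic)           # agree closely when C << kB*T
```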


The Lennard-Jones Potential

The Lennard-Jones 6-12 potential approximates the intermolecular interactions of two atoms due to Pauli repulsion and London dispersion attraction. The potential is defined in terms of the well depth (ε) and the distance at which the potential crosses zero (σ). Other formulations use the radius where the minimum occurs, Rmin, instead of σ.

The Lennard-Jones potential describes the interactions of two neutral particles using a relatively simple mathematical model. Two neutral molecules feel both attractive and repulsive forces based on their relative proximity and polarizability. The sum of these forces gives rise to the Lennard-Jones potential, as seen below:

V(r) = 4·ε·[(σ/r)^12 - (σ/r)^6] [1]

where ε is the potential well depth, σ is the distance where the potential equals zero, and Rmin is the distance where the potential reaches a minimum (approximately double the van der Waals radius of the atom), i.e., the equilibrium separation of the two particles.

The relationship between σ and Rmin is

Rmin = 2^(1/6)·σ

Pauli Repulsion

The first half of the Lennard-Jones potential is Pauli repulsion. This occurs when two closed-shell atoms or molecules come into close proximity to each other and their electron density distributions overlap. This causes high inter-electron repulsion and, at extremely short distances, inter-nuclear repulsion. This repulsion follows an exponential distribution of electron density:

Vrep = A·e^(-c·r)

where A and c are constants and r is the intermolecular distance. In a liquid, however, it is very unlikely that two particles will be at highly repulsive distances, and thus a simplified expression can be used by assuming the potential has an r^-12 dependence (note that this high exponent means that the energy of repulsion drops off extremely fast as the molecules separate). The resulting simple polynomial is as follows:

Vrep = C12/r^12

where the C12 coefficient is defined as:

C12 = 4·ε·σ^12

An alternative form that retains the exponential repulsion is the Buckingham potential[2].


London Dispersion

The second half of the Lennard-Jones potential is known as London dispersion, or the induced dipole-induced dipole interaction. While a particular molecule may not normally have a dipole moment, at any one instant its electrons may be asymmetrically distributed, giving an instantaneous dipole. The strength of these instantaneous dipoles, and thus the strength of the attractive force, depends on the polarizability and ionization potential of the molecules. The ionization potential measures how strongly the outer electrons are held by the atoms. The more polarizable the molecule, the more its electron density can distort, creating larger instantaneous dipoles. Much like Pauli repulsion, this force depends on a coefficient, C6, and also decays as the molecules move further apart. In this case the dependence is r^-6:

Vdisp = -C6/r^6    with    C6 = (3/4)·α'^2·I

where α' is the polarizability, usually given as a volume, and I is the ionization energy, usually given in electron volts. Lastly, the C6 constant can also be expressed in terms of the variables of the Lennard-Jones equation:

C6 = 4·ε·σ^6

Combination Rules

The estimation of Lennard-Jones potential parameters for mixed pairs of atoms (i.e., AB) using the Lorentz-Berthelot combination rule.

In the case of two different molecules interacting, a combination rule called the Lorentz-Berthelot combination rule can be imposed to create new σ and ε values. The new σ is the arithmetic mean, σAB = (σA + σB)/2, and the new ε is the geometric mean, εAB = √(εA·εB). For example, an Ar-Xe L-J plot will have intermediate σ and ε values between those of Ar-Ar and Xe-Xe. An example of this combination rule can be seen in the figure to the right.
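A minimal sketch of the Lorentz-Berthelot rule. The Ar parameters are those used in the example below; the Xe parameters are illustrative assumptions:

```python
import math

def lorentz_berthelot(sigma_a, eps_a, sigma_b, eps_b):
    """Return (sigma_AB, eps_AB): arithmetic mean of sigma, geometric mean of epsilon."""
    return (sigma_a + sigma_b) / 2, math.sqrt(eps_a * eps_b)

sigma_ar, eps_ar = 3.40, 0.997   # Ar, Angstrom and kJ/mol
sigma_xe, eps_xe = 3.96, 1.84    # assumed Xe parameters
sigma_mix, eps_mix = lorentz_berthelot(sigma_ar, eps_ar, sigma_xe, eps_xe)
print(sigma_mix, eps_mix)        # intermediate between the Ar-Ar and Xe-Xe values
```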

Example

Calculate the intermolecular potential between two argon (Ar) atoms separated by a distance of 4.0 Å (use ε = 0.997 kJ/mol and σ = 3.40 Å).
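A sketch of the worked example, evaluating the Lennard-Jones 6-12 potential directly:

```python
def lennard_jones(r, eps, sigma):
    """Lennard-Jones 6-12 potential: V = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

v = lennard_jones(4.0, 0.997, 3.40)   # Ar-Ar at 4.0 Angstrom, kJ/mol
print(v)                              # ~ -0.94 kJ/mol (attractive, near the well)

r_min = 2**(1 / 6) * 3.40             # ~ 3.82 Angstrom
print(lennard_jones(r_min, 0.997, 3.40))  # -eps at the minimum
```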

References

  1. Lennard-Jones, J. E. (1924), "On the Determination of Molecular Fields", Proc. R. Soc. Lond. A, 106 (738): 463–477, Bibcode:1924RSPSA.106..463J, doi:10.1098/rspa.1924.0082.
  2. Buckingham, R. A. (1938), "The Classical Equation of State of Gaseous Helium, Neon and Argon", Proc. R. Soc. Lond. A, 168: 264-283.

See Also

Wikipedia: Lennard-Jones potential
Chemwiki: Lennard-Jones


Intramolecular Forces

The representation of the potential energy of a torsional angle rotation using a cosine representation.

Bond Stretches

Bond stretches are alterations of a bond length within a molecule. Using water as an example, we can describe the two modes of stretching: symmetric, in which the two oxygen-hydrogen bonds lengthen and shorten at the same time, and asymmetric, in which one bond lengthens while the other shortens. The energy associated with this motion can be described using the following equation:
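The equation is presumably the harmonic (Hooke's law) form used throughout this book (some force fields fold a factor of 1/2 into the constant):

```latex
V_{\mathrm{bond}}(r) = k_{\mathrm{bond}}\,(r - r_e)^2
```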

We see a Hooke's law relationship, where k is a force constant for the bond and the x2 term of the Hooke's law equation is replaced with the displacement of the bond length from its equilibrium value, (r − re). This harmonic approximation gives an accurate value for the energy of stretching the bond of the given molecule near its equilibrium length.

Bond Angle Bending

Bond angle energies arise from distortion of the optimal bond angle to a less favorable conformation. This can be either a decrease in the angle, which brings the orbitals closer to one another and increases the potential energy, or an increase in the angle, which moves the orbitals away from the ideal overlap that gives the molecule its favorable, stabilizing orbital interactions. Either distortion moves the molecule out of its low-energy conformation and increases the potential energy. This energy relationship can be observed through the following equation:
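By analogy with the bond-stretch term, the equation is presumably the harmonic form:

```latex
V_{\mathrm{angle}}(\theta) = k_{\mathrm{angle}}\,(\theta - \theta_e)^2
```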

We again see a Hooke's law relationship, where k is a force constant for the bond angle and the x2 term of the Hooke's law equation is replaced with the displacement of the bond angle from its equilibrium value, (θ − θe). This harmonic approximation gives an accurate value for the energy of bending the bond angle of the given molecule.

Torsional Rotations

The torsional potential energy surface of butane.

Torsional rotations, which can also be described as dihedral interactions, come about when a system of 4 or more atoms forms a molecule. This interaction relies upon rotation of the central bond into a favorable orientation with the most stable overlap conformation. Some of these known conformations are called synperiplanar, anticlinal, gauche, and antiperiplanar, listed in order of highest to lowest energy. By observing Newman projections along the bond, we can see where the unfavorable conformational interactions lie. The methyl groups here are considered large and have unfavorable overlap with one another. Mathematically, this can be represented using the following equation:
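The equation is presumably the standard periodic cosine torsional potential:

```latex
V_{\mathrm{torsion}}(\phi) = k_\phi\,\bigl[\,1 + \cos(n\phi - \delta)\,\bigr]
```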

Where k is the rotational barrier of the system, n is the periodicity (the rotations repeat over 360 degrees), ɸ is the dihedral angle, and δ is the offset of the function.

Improper Torsion

An improper torsion is a special type of torsional potential that is designed to enforce the planarity of a set of atoms (ijkl), where i is the index of the central atom. The improper torsion angle is defined as the angle between the planes defined by atoms ijk and jkl.

Internal Non-Bonded Interactions

The atoms of a molecule also have intramolecular steric, dispersive, and electrostatic interactions with each other. These interactions within a molecule determine its conformation; steric repulsion, together with the Coulombic and dispersive interactions, determines which conformation the molecule adopts. Every atom in the system can be assigned a partial charge (q). These interactions are calculated using standard Lennard-Jones and Coulombic pairwise interaction terms,

Here we see the combination of the Lennard-Jones equation and Coulomb's law. With this relationship we can calculate the energy of the non-bonded intramolecular interactions. It should be noted that interactions between atoms that are directly bonded, or that form a bond angle, are excluded from this relationship; these interactions are already included through the calculations in the previous sections.
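A minimal Python sketch of such a pairwise non-bonded term (the parameter values below are hypothetical; the Coulomb prefactor 1389.35 kJ·mol⁻¹·Å·e⁻² is e²N_A/(4πε0) converted to these units):

```python
COULOMB_CONST = 1389.35  # kJ/mol * angstrom / e^2, i.e., e^2 * N_A / (4*pi*eps0)

def nonbonded_pair_energy(r, qi, qj, sigma, epsilon):
    """Lennard-Jones plus Coulomb energy for one non-bonded atom pair.

    r and sigma in angstroms; qi, qj in elementary charges; epsilon in kJ/mol.
    Bonded (1-2) and angle (1-3) pairs would simply be skipped by the caller.
    """
    sr6 = (sigma / r) ** 6
    lj = 4.0 * epsilon * (sr6 ** 2 - sr6)
    coulomb = COULOMB_CONST * qi * qj / r
    return lj + coulomb

# Hypothetical parameters for a single pair of partially charged atoms:
E = nonbonded_pair_energy(3.0, 0.5, 0.5, sigma=3.2, epsilon=0.5)
```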


Molecular Mechanical Force Fields

General Form of Potential Energy Function

The overall energy of a system is based on an enormous number of factors, and it is the sum of these individual attractive and repulsive forces between all the atoms in the system that results in a final "force field". The potential energy for a set of atomic coordinates (r) is the sum of bonded (i.e., bond, angle, and dihedral) and non-bonded (i.e., repulsion/dispersion and electrostatic) terms. This sum ultimately defines the molecular mechanical force field.

Due to this complexity, calculating the potential energy for a set of molecules using a force field would require a complete set of known parameters, which also require several different methods of determination. These methods are outlined in this section:

Interaction            Term type    Parameters
Bonds                  Bonded       spring constant (kbond), equilibrium length (re)
Angles                 Bonded       spring constant (kangle), equilibrium angle (θe)
Dihedrals              Bonded       barrier height (kφ), periodicity (n), offset (δ)
Repulsion/Dispersion   Non-bonded   atomic radii (σ), well depth (ε)
Electrostatics         Non-bonded   partial atomic charges (q)

Parameterization

Empirical Parameters

For small molecules, parameters may be defined to fit experimental properties. Each set of non-bonded parameters gives rise to a different set of bulk physical properties for the liquid, so combinations of parameters can be tested until they properly reproduce the properties of the liquid. An example would be determining a proper model for bulk water, assuming no Lennard-Jones interactions for the hydrogen atoms: parameters would be tested together until the model properly reproduced the expected characteristics of bulk water.

Calculating Electrostatic Interactions

Starting with the non-bonded terms, we can look at electrostatics. Every intermolecular pair of atoms has this electrostatic interaction, and the total can be written as a sum comparing all atoms in one molecule (A) to all atoms in another molecule (B). This quickly adds up to a massive number of terms even for systems of a few complex molecules:
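The sum presumably runs over all atom pairs between the two molecules:

```latex
V_{\mathrm{elec}} = \sum_{i \in A} \sum_{j \in B} \frac{q_i\, q_j}{4\pi\varepsilon_0\, r_{ij}}
```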

With regard to partial charges, since there is no rigorous way to translate a charge distribution into point charges, certain constraints are imposed: the sum of the partial charges should equal the formal charge of the molecule, and equivalent atoms should carry the same charge. Charges can be chosen to match the gas-phase dipole moment, treated as free parameters and adjusted to reproduce target properties, or fit to quantum mechanical electrostatics:

Charge fitting schemes fit partial charges to the quantum mechanical electrostatic potential (the potential energy a unit point charge would have at position r in space). The potential can be calculated at a set of points around the molecule to generate an electrostatic potential surface (i.e., a map of the electrostatic potential around the molecule from which localized charges can be determined):

Target Electrostatic Potential Data from Quantum Chemical Calculations

The first term represents the electron density, and the second term represents the nuclear charges at positions Ri:
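A plausible reconstruction of the missing expression, with the electron-density term first and the nuclear term second, as the text describes:

```latex
\phi_{\mathrm{QM}}(\mathbf{r}) = -\int \frac{\rho(\mathbf{r}')}{\left|\mathbf{r} - \mathbf{r}'\right|}\, d\mathbf{r}' \;+\; \sum_i \frac{Z_i}{\left|\mathbf{r} - \mathbf{R}_i\right|}
```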

Molecular Mechanical Partial Atomic Charges

This model evaluates the electrostatic potential of the partial atomic charges at positions Ri:
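The molecular mechanical potential is presumably the sum over point charges:

```latex
\phi_{\mathrm{MM}}(\mathbf{r}) = \sum_i \frac{q_i}{\left|\mathbf{r} - \mathbf{R}_i\right|}
```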

In fitting the charges, the point charges (q) must be defined such that the molecular mechanical electrostatic potential reproduces the quantum mechanical one as closely as possible, thereby fitting the two models to each other.

Restraining terms can be introduced to give more practical models of molecules (known as Restrained Electrostatic Potential (RESP) fitting). Such restraints limit charges to moderate and chemically reasonable values. Restraints are also placed to ensure that chemically equivalent groups carry the same charges. These groups are given an atom type distinguished by element, hybridization, and bonding partners, and parameters are defined for each type. For example, all aromatic carbons in the same benzene ring should have the same parameters.

Generalized Amber Force Field (GAFF)

GAFF is a commonly used force field designed to provide reasonable parameters for simulations of small organic molecules. GAFF uses the RESP method for assigning partial charges, and the atom type defines the Lennard-Jones parameters and internal bonding terms associated with each atom. Atom type definitions can be extremely specific (e.g., H on an aliphatic C with 3 electron-withdrawing groups; inner sp2 C in conjugated ring systems).

Lennard-Jones Potential

A combination of the Pauli repulsion and London dispersion attraction terms, the Lennard-Jones potential is defined using two parameters: the van der Waals radius (σ) and the well depth (ε). These parameters are responsible for the packing of molecules in liquids, and must compensate for errors in all other properties. The van der Waals radius essentially sets the limit at which molecules may approach one another (the potential energy is positive when the separation is less than σ). The well depth sets the minimum of the potential energy, which occurs when the distance between the molecules is Rmin (the radius related to the minimum potential energy). The first term represents Pauli repulsion, the second term London dispersion attraction:
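The potential, with the repulsive term first and the attractive term second (Rmin = 2^(1/6) σ):

```latex
V_{LJ}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]
```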


Bonding Interactions[edit | edit source]

Bond Stretch

Atoms within a molecule may vary their separation (stretch or compress), and this relationship can be described as a spring. This is why Hooke's law can be applied to bond lengths to define the energy required to stretch or compress a bond, according to a spring constant (kbond) and equilibrium length (re). The spring constant may be determined from the infrared vibrational frequency of the bond stretch, while equilibrium bond lengths can be found through the use of rotational spectroscopy and crystallography.
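The relation between the vibrational frequency and the force constant is presumably the harmonic oscillator formula (μ is the reduced mass of the two bonded atoms):

```latex
\nu = \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{bond}}}{\mu}}
\qquad\Longrightarrow\qquad
k_{\mathrm{bond}} = \mu\,(2\pi\nu)^2
```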


Angle Stretch

The potential energy of bending a bond angle can also be approximated by Hooke's law. Hooke's law can again be applied to bond angles to define the energy required to bend the angle, according to a spring constant (kangle) and equilibrium angle (θe). Again, the spring constant may be determined from the infrared vibrational frequency of the bending mode, following the same formula, while the equilibrium angle can be found through the use of rotational spectroscopy and crystallography.

Dihedral Parameters

Also known as torsional rotations, dihedral relationships are produced by 4 atoms. This relates to how the central bond is rotated and the overlap of the outer two atoms (designated as 1,4 interactions, with 2 and 3 being the central atoms). These overlaps may result in a higher or lower potential energy depending on the conformation (i.e., gauche, trans, periplanar, etc.), and the relationship is dependent on barrier height (kφ) (the energy required to rotate about the central axis), periodicity (n), and offset (δ). Determination of these parameters can be done using quantum chemistry or experimental isomer populations.



Calculation of molecular forces using quantum chemistry

Intermolecular Interactions

Intermolecular interaction energies can be calculated using quantum chemical programs by comparing the energy of the complex to the energies of its isolated fragments.

The fragments are the atoms or molecules that comprise the complex.
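In this supermolecular approach, the interaction energy of a complex AB is the difference:

```latex
\Delta E_{\mathrm{int}} = E_{AB} - E_A - E_B
```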

Basis Set Superposition Error

Many quantum chemical programs use atom centered basis functions to represent the atomic orbitals. For a given calculation, the number of basis functions used is fixed and finite. Inevitably, the description of the orbitals will be incomplete, in that a more accurate answer could be obtained if more basis functions were added.

When calculating intermolecular interactions, the calculation of the complex will include the basis functions from all the fragments. At close intermolecular contacts, the tails of these orbitals will extend into the space of atoms in the other fragments. In this way, the fragments will be stabilized by the presence of these additional basis functions, so the energy of the complex includes both the stabilization due to the intermolecular interaction and this "basis set superposition error."

Counterpoise Corrections
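In the counterpoise scheme of Boys and Bernardi, each fragment energy is recomputed in the full basis set of the complex (denoted here by an asterisk), which removes the superposition error by construction (a sketch of the standard formula):

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB} - E_A^{*} - E_B^{*}
```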

Intramolecular Interactions

Dihedral Potentials


Statistical properties

Statistical thermodynamics describes the physical properties of systems in terms of probability distributions.

Probability Distributions

A probability distribution is a function that gives the likelihood of each outcome. In statistical mechanics, this generally means the probability of a system being in a particular state.

Gaussian Distribution

A Gaussian distribution, also called a normal distribution, is a function with a bell-shaped curve. Any random variable with this normal probability density

is called a normal variable with mean μ and variance σ2. [1]
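The density referred to above is presumably the standard normal form:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```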

The Boltzmann distribution is one such probability distribution; it gives the probability of a state of a system as a function of that state's energy and the temperature. [2]
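In symbols, the probability of state i with energy Ei at temperature T is:

```latex
p_i = \frac{e^{-E_i/k_B T}}{\sum_j e^{-E_j/k_B T}}
```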

As shown below, the physical properties of a system can be calculated using this Boltzmann distribution.

Conformational Distribution

Within solids, liquids, and gases, atoms can take on different orientations and arrangements. Conformational changes within a molecule occur by rotations around bonds. For example, gauche or eclipsed conformations, trans or cis, and any conformation in between would all fall under the conformational distribution.

Calculating Physical Properties

Macroscopic properties are the average over the arrangements a system takes on over time. Assuming all possible energy levels are continuous, we can employ classical mechanics to calculate physical properties. This assumption is only valid when the particles are heavy and the forces are relatively soft.

Averages

The ensemble average in statistical mechanics refers to the mean over all possible states of a system as given by its probability distribution. It is dependent on the ensemble chosen (e.g., canonical, microcanonical, etc.).

The expectation value gives the average value obtained when a physical property of a system is measured. Depending on the type of distribution, it may not be the most probable value, but it is the probability-weighted value that we expect to measure. In a classical system, the expectation value is an integral over all of the possible configurations.
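For a property A that depends only on the coordinates r, the configurational expectation value is presumably:

```latex
\langle A \rangle = \frac{\int A(\mathbf{r})\, e^{-V(\mathbf{r})/k_B T}\, d\mathbf{r}}{\int e^{-V(\mathbf{r})/k_B T}\, d\mathbf{r}}
```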

where A represents the property being calculated, V represents the potential energy, kB represents the Boltzmann constant, and T represents the temperature of the system.

In classical statistical mechanics, the classical ensemble average is the normalized Boltzmann weighted integral over all phase space.
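In phase space, this average runs over both positions and momenta:

```latex
\langle A \rangle = \frac{\iint A(\mathbf{r},\mathbf{p})\, e^{-H(\mathbf{r},\mathbf{p})/k_B T}\, d\mathbf{r}\, d\mathbf{p}}{\iint e^{-H(\mathbf{r},\mathbf{p})/k_B T}\, d\mathbf{r}\, d\mathbf{p}}
```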

Where H is the Hamiltonian of the described system. This expression can be used to find many physical properties, such as the average energy of a single-particle system or of many-body systems, by integrating over the spatial coordinates and over the Maxwell distribution of momenta.

In a quantum mechanical system, the expectation value is the Boltzmann weighted sum over energy levels.
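For discrete energy levels Ei, this becomes the Boltzmann-weighted sum:

```latex
\langle A \rangle = \frac{\sum_i A_i\, e^{-E_i/k_B T}}{\sum_i e^{-E_i/k_B T}}
```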

Example: Conformationally Averaged Dipole Moment

Conformational averaging using the Boltzmann distribution gives a way of finding the average dipole moment, as described in the previous section.

The dipole moment of a molecule changes as its conformation changes, since it is the vector sum of all the bond moments, and therefore the observed (or 'expected') value is a population-weighted average over the conformations. For example, in 1,2-dichloroethane the anti conformation is non-polar but the gauche conformation is polar. Since the molecular dipole is a vector quantity, the conformationally averaged dipole moment is computed from the average of the squares of the individual dipole moments.


Using this formula, the conformationally averaged dipole moment can be calculated for a molecule like 1,2-dichloroethane, where the anti conformation has no dipole and the gauche conformation does.
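A Python sketch of this calculation, with hypothetical values for 1,2-dichloroethane (an assumed gauche-anti energy gap of 5.0 kJ/mol and a gauche dipole of 2.6 D; neither number comes from the text):

```python
import math

K_B_T = 0.008314 * 298  # RT at 298 K in kJ/mol (molar Boltzmann factor)

# Conformers: (name, relative energy in kJ/mol, dipole in debye, degeneracy)
conformers = [
    ("anti",   0.0, 0.0, 1),  # non-polar
    ("gauche", 5.0, 2.6, 2),  # polar; two equivalent gauche rotamers
]

# Partition function from Boltzmann weights
Z = sum(g * math.exp(-E / K_B_T) for name, E, mu, g in conformers)

# Conformationally averaged dipole: square root of the weighted mean of mu^2
mu_sq = sum(g * math.exp(-E / K_B_T) * mu**2 for name, E, mu, g in conformers) / Z
mu_avg = math.sqrt(mu_sq)
```

With these assumed numbers the average lies between the anti and gauche values, closer to zero because the non-polar anti conformer dominates the population.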

Variance

Variance is a value that indicates how widely spread a set of data is compared to the average. The variance is the expectation of the square of the deviation of a random variable from its mean; equivalently, it is the square of the standard deviation. If the standard deviation or variance is low, the data lie close to the expected value. The variance of a random variable X is Var(X) = E[(X − μ)2], where E represents the expectation and μ represents the mean. Variance can be represented by σ2 or Var(X).


References


Thermodynamic ensembles

The entire universe is governed by the laws of thermodynamics through the transfer of energy between matter. This process is too complex to consider directly; instead, we consider several simplifications where part of the universe (i.e., the system) is considered separately from the rest of its surroundings. The system houses the process or thermodynamic state under consideration, whereas the surroundings contain everything else.

The system can be described using an ensemble, also known as a statistical ensemble, which is an idealization consisting of a large number of copies of a system, considered all at once, each of which represents a possible state that the real system might be in. A thermodynamic ensemble is a statistical ensemble that is in statistical equilibrium. It provides a way to derive the properties of real thermodynamic systems from the laws of classical and quantum mechanics.

We can consider different systems with different degrees of separation from their surroundings, from completely isolated (e.g., microcanonical ensemble) to completely open to its surroundings (e.g., grand canonical ensemble). However, for the purposes of molecular simulations only three main ensembles will be considered: namely, the microcanonical, canonical, and isothermal-isobaric ensembles.

Microcanonical ensemble

The microcanonical ensemble is a system that is completely isolated from its surroundings. There is no transfer of energy or matter between the system and the surroundings, and the volume of the system remains fixed. In other words, this is a physical system where the total number of particles (N), the total volume (V), and the total energy (E) are constant. The microcanonical ensemble is typically abbreviated NVE.

Constant number of particles (N), volume (V), and total energy (E).

The total energy of the system is constant, but there is no defined temperature of the microcanonical ensemble. Temperature can only be defined for systems interacting with their surroundings. Isolated systems, such as microcanonical, are only described in terms of their composition, volume, and total energy. In statistical mechanics, a microcanonical ensemble is used to represent the possible states of a system which have a specified total energy.

Canonical ensemble

In the canonical ensemble, the volume of the system is fixed, and energy can transfer across the boundary between system and surroundings, but matter cannot. We can describe systems as being immersed in a heat bath at a temperature (T), where the heat bath is much larger than the system. No amount of heat put off by the system will significantly increase the temperature of the surroundings. The canonical ensemble applies to systems of any size; while it is necessary to assume that the relative size of the heat bath is very large, the system itself may be small or large. Now because the system and surroundings are in thermal contact, the system will transfer heat (q) to and from the surroundings until they are in thermal equilibrium. Therefore, unlike the microcanonical ensemble, the temperature of the canonical ensemble can be a defined constant (T). This ensemble is typically abbreviated in terms of constant number of particles, total volume, and temperature (NVT).

Constant number of particles (N), volume (V), and temperature (T).

The canonical ensemble is a statistical mechanics ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at some fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ only in total energy. In the microcanonical ensemble, the total energy is fixed, but in the canonical ensemble the energy is no longer constant. It can take on a range of values depending on the temperature. The canonical ensemble is important when attempting to describe the Helmholtz free energy of a system, which is the maximum amount of work a system can do at a constant volume (V) and temperature (T).

Isothermal-Isobaric ensemble

In the isothermal-isobaric ensemble, energy can transfer across the boundary, but matter cannot. The volume of the system can change such that the internal pressure of the system matches the pressure exerted on the system by its surroundings. Similar to the canonical ensemble, we can describe the isothermal-isobaric ensemble as a system immersed in a heat bath at a temperature (T), where the heat bath is much larger than the system. No amount of heat put off by the system will significantly increase the temperature of the surroundings.

The isothermal-isobaric ensemble is a statistical mechanical ensemble that maintains a constant total number of particles, and constant temperature (T) and pressure (p), typically abbreviated NpT. This ensemble plays an important role in chemistry since the majority of important chemical reactions are carried out under constant pressure conditions. The isothermal-isobaric ensemble is also important when attempting to describe the Gibbs free energy of a system, which is the maximum amount of work a system can do at constant pressure (p) and temperature (T).

Constant number of particles (N), pressure (p), and temperature (T).


Phase space

The Hamiltonian

The Hamiltonian, H, is a function that gives the sum of the potential and kinetic energy of a system. For the systems considered here, the Hamiltonian is a function of the positions (r) and momenta (p) of the particles in the system. The potential energy, V(r), depends only on the positions of the atoms, and the kinetic energy, K(p), depends only on the momenta of the particles [1]. Therefore, the Hamiltonian can be separated into a potential energy term that depends only on the positions of the particles (V(r)) and a kinetic energy term that depends only on the momenta of the particles (K(p)).
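In symbols, for N particles of mass mi:

```latex
H(\mathbf{r},\mathbf{p}) = K(\mathbf{p}) + V(\mathbf{r}) = \sum_{i=1}^{N} \frac{\left|\mathbf{p}_i\right|^2}{2 m_i} + V(\mathbf{r})
```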

The microcanonical ensemble is the statistical ensemble used to represent the isolated system, where the number of particles (N), the volume (V), and the total energy (E) are constant.

Phase Space

The unique positions and momenta of all particles in a system of N particles can be described by a 6N-dimensional space called phase space. Each particle in a system has three unique position variables, ranging from 0 to L, and three unique momentum variables, ranging from −∞ to +∞ [1]. These 6N numbers constitute the microscopic state of the system at time t [2,3]. A system of N particles corresponds to a phase space with 6N variables, which describes every conceivable position and momentum combination of all particles in a mechanical system [1,2,3].

Phase Point

A phase point is a single point in phase space that specifies the state of the N-body system at a time t [2]. The dynamics of the system are fully described by the motion and trajectory of the phase point as it travels through phase space [2]. Every classical state can be considered equally probable when N, V, and E are held constant [3].

Two argon atoms inside phase space. The blue and green arrow vectors correspond to the atoms' positions and momenta, respectively. The atoms are non-interacting; however, they do change trajectory as they bounce off the walls of the box.

Example

Consider a system containing two particles of argon (labelled 1 and 2, respectively), where N = 2. Since phase space is described by 6N dimensions, the total number of phase-space variables becomes 6(2) = 12.

  • r1, position of particle 1:
  • r2, position of particle 2:
  • p1, momentum of particle 1:
  • p2, momentum of particle 2:

Argon is a noble gas and inert to intermolecular interactions with other particles in a system. Accordingly, the two argon atoms in this hypothetical system are treated as non-interacting. When atoms are assumed to be non-interacting, potential energy interactions between particles are no longer accounted for and V(r) = 0.

The Hamiltonian becomes the sum of the kinetic energies of the two particles:
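Assuming both atoms have the argon mass mAr:

```latex
H = \frac{\left|\mathbf{p}_1\right|^2}{2 m_{\mathrm{Ar}}} + \frac{\left|\mathbf{p}_2\right|^2}{2 m_{\mathrm{Ar}}}
```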

The Classical Partition Function

In quantum mechanical systems, the system must occupy a set of discrete (quantized) states. These states correspond to the energy levels of the system, labelled Ei. A partition function can be defined as a sum over these states.
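The sum-over-states partition function is:

```latex
Q = \sum_i e^{-E_i/k_B T}
```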

Where Ei is the energy of state i, kB is the Boltzmann constant, and T is the temperature.

Explicit sums are practical when the energy states of molecules are independent and there are no intermolecular interactions. In systems where intermolecular interactions are significant, it is generally more practical to approximate the system as existing in terms of continuous variables. The states are approximated to be continuous and can be integrated with respect to the phase space of the system while simultaneously sampling every state of the system. This leads to the classical partition function:
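The classical partition function is presumably the standard phase-space integral (h is Planck's constant; the N! accounts for the indistinguishability of the particles):

```latex
Q = \frac{1}{N!\, h^{3N}} \iint e^{-H(\mathbf{r},\mathbf{p})/k_B T}\, d\mathbf{r}\, d\mathbf{p}
```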

Many Body Systems

The expectation values of a classical system can be evaluated by integrating over phase space. The classical partition function is an integral over 6N phase space variables:

References

  1. McQuarrie, Donald A. 1973. Statistical Thermodynamics. New York, N.Y. Harper & Row, Publishers, Inc. Pages 117-118.
  2. Tuckerman, Mark E. 2010. Statistical Mechanics: Theory and Molecular Simulation. Oxford England. New York: Oxford University Press. Pages 2-5.


Dilute gases

A molecular dynamics simulation of high-density argon gas at 300 K. Argon-argon interactions are described using a Lennard-Jones potential. The gas density is 5282 mol per cubic meter. The pressure is approximately 13200 kPa.

Ideal Gas Law

Gases are often described using the ideal gas law, which relates the pressure of a gas to its density with a simple expression.

Where ρ is the number density of the gas, p is the pressure of the gas, kB is the Boltzmann constant, equal to 1.380649 × 10−23 J/K, and T is the temperature in Kelvin. The expression is derived by approximating a gas as point masses that have only kinetic energy and experience perfectly elastic collisions. Unfortunately, this theory breaks down at densities where intermolecular forces become significant, that is, when the potential energy is non-zero. The ideal gas law is suitable only for very dilute gases.
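As a quick check, the ideal gas law nearly reproduces the pressure quoted for the argon simulation pictured above (5282 mol/m³ at 300 K):

```python
N_A = 6.02214076e23   # Avogadro's number, 1/mol
K_B = 1.380649e-23    # Boltzmann constant, J/K

def ideal_gas_pressure(molar_density, T):
    """p = rho * k_B * T, with rho converted from mol/m^3 to molecules/m^3."""
    rho = molar_density * N_A   # number density, 1/m^3
    return rho * K_B * T        # pressure in Pa

p_kPa = ideal_gas_pressure(5282.0, 300.0) / 1000.0
print(f"{p_kPa:.0f} kPa")  # ~13175 kPa, close to the simulated ~13200 kPa
```

The small difference from the simulated value reflects the non-zero intermolecular interactions at this density.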

Virial Theorem

To describe the properties of dilute gases more accurately, the virial equation of state is used. The virial theorem accounts for the effects of intermolecular forces through an expansion in higher-order powers of the density. Mathematically, this is described using an infinite power series, where B and C are the second and third virial coefficients.
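The series is presumably the density expansion:

```latex
\frac{p}{k_B T} = \rho + B(T)\,\rho^2 + C(T)\,\rho^3 + \cdots
```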

At low densities, the deviations from ideal gas behavior are sufficiently described by the second virial coefficient, B.
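The expression relating B to the configurational integral is presumably the standard cluster-expansion result:

```latex
B(T) = -\frac{Z_2 - V^2}{2V}
```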

Where V is the volume of the gas and Z2 is the configurational integral. The configurational integral for the second virial coefficient is the contribution of every possible pair of positions weighted by its Boltzmann factor. For the third virial coefficient, the configurational integral would be the contribution of three interacting particles. By extension, the configurational integral for any nth virial coefficient would be the contribution of n interacting particles. For this reason, higher virial coefficients are quite complicated to derive; fortunately, they are only necessary for describing gases at pressures above 10 atm [3]. The configurational integral for two interacting particles is as follows:
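The pair configurational integral is presumably:

```latex
Z_2 = \iint e^{-u(\mathbf{r}_1,\mathbf{r}_2)/k_B T}\, d\mathbf{r}_1\, d\mathbf{r}_2
```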

Where u is the potential energy of the interaction of a single pair of particles, and r1 and r2 are the positions of particles 1 and 2. The second virial coefficient can be written in terms of the pairwise intermolecular interaction potential, u(r), if the position of particle 2 is defined relative to the position of particle 1. The distance, r, is then the distance between the two interacting particles. The equation derived from this modification is as follows:
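With particle 2 measured relative to particle 1, the second virial coefficient takes the standard form:

```latex
B(T) = -2\pi \int_0^{\infty} \left( e^{-u(r)/k_B T} - 1 \right) r^2\, dr
```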

A different B is derived for the hard sphere potential and for the Lennard-Jones potential.

Hard Sphere Potential

The hard sphere model approximates particles as hard spheres that cannot overlap: if the spheres are not overlapping, the potential energy is zero, and if they are overlapping, the potential energy is infinitely high. This approximation represents the very strong short-range Pauli repulsion forces. The equation for the potential energy is as follows:
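In terms of the hard-sphere contact distance σ:

```latex
u(r) =
\begin{cases}
\infty, & r < \sigma \\
0, & r \geq \sigma
\end{cases}
```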

Where u(r) is the potential energy and σ is the radius of the hard sphere. Integrating the configurational integral for the hard sphere potential gives B = 2πσ3/3 as the second virial coefficient. This model is crude and accounts only for repulsive forces; a slightly more accurate model is the Lennard-Jones potential.

Lennard-Jones Potential

The Lennard-Jones potential is a combination of a polynomial repulsion term, proportional to r−12, and a London dispersion attraction term, proportional to r−6.

The repulsive and attractive terms can be expanded, and the potential energy, u(r), can be expressed as:

Where ε is the potential well depth, σ is the distance at which the potential crosses zero (the intercept), and r is the distance between the particles. The second virial coefficient derived from the Lennard-Jones potential has no analytical solution and must be solved numerically.
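A numerical sketch in Python, using the argon parameters from the earlier example (ε = 0.997 kJ/mol, σ = 3.40 Å; these values are reused here as an assumption) and a simple rectangle-rule quadrature. The result at 300 K is roughly −15 cm³/mol, in the vicinity of experimental values for argon:

```python
import math

def lj_second_virial(T=300.0, epsilon=0.997, sigma=3.40, n=20000, rmax_factor=10.0):
    """Second virial coefficient B(T) of a 12-6 Lennard-Jones gas, in cm^3/mol.

    epsilon in kJ/mol, sigma in angstroms. Rectangle-rule quadrature of
    B(T) = -2*pi*N_A * integral_0^inf (exp(-u(r)/RT) - 1) r^2 dr."""
    RT = 0.0083145 * T                # kJ/mol
    dr = rmax_factor * sigma / n
    integral = 0.0
    for i in range(1, n + 1):
        r = i * dr
        sr6 = (sigma / r) ** 6
        u = 4.0 * epsilon * (sr6 ** 2 - sr6)
        integral += (math.exp(-u / RT) - 1.0) * r * r * dr
    b2_per_molecule = -2.0 * math.pi * integral   # angstrom^3 per molecule
    return b2_per_molecule * 0.60221              # convert to cm^3/mol

B = lj_second_virial()  # negative at 300 K: attraction dominates
```

Truncating the integral at 10σ is safe because the integrand decays as r⁻⁴ at large separation.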

The Lennard-Jones model is more accurate than the hard sphere model, as it accounts for attractive interactions and its repulsive term is more realistic than hard sphere repulsion. That being said, it is still limited by the fact that only London dispersion attractive interactions are considered, making the model applicable only to noble gases.

References


Potential of mean force

The potential of mean force is one of the three major forces considered in stochastic dynamics models. The potential of mean force provides a free energy profile along a chosen coordinate, be it a geometric or energetic coordinate, such as the distance between two atoms or the torsional angle of a molecule. This free energy profile is generated by the average force from all possible configurations of a given system (the ensemble average of the force) acting on the particles of interest. The potential of mean force can be determined in both Monte Carlo and molecular dynamics simulations.

Relation to Radial Distribution Function

For a liquid, the potential of mean force is related to the radial distribution function.
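For particles separated by a distance r, the relation to the radial distribution function g(r) is:

```latex
w(r) = -k_B T \ln g(r)
```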

The potential of mean force can be calculated directly by using histogram analysis of a trajectory from an MD or MC simulation. This analysis calculates the probability of each possible configuration, and determines the free energy change associated with that state. In systems where parts of the coordinate of interest will not be sufficiently sampled from a naive simulation, umbrella sampling, free energy perturbation, or other enhanced sampling techniques can be used.

Derivation

By integrating this function between two distance values, the reversible work associated with moving the two particles between those separations can be found. This work is equal to the change in Helmholtz energy between the two states.

Sources[edit | edit source]

1. Chandler, D. Introduction To Modern Statistical Mechanics, Oxford University Press, 1987, QC174.8.C47

2. Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation, Oxford University Press, 2010

See Also[edit | edit source]

Wikipedia:Potential of mean force


Langevin dynamics

Langevin dynamics is used to describe the acceleration of a particle in a liquid.

$$\frac{dv}{dt} = -\frac{\zeta}{m}\,v(t) + \frac{1}{m}\,F_r(t) \qquad (1)$$[4]

The $\frac{\zeta}{m} v(t)$ term corresponds to the drag force divided by the mass, m, of the particle. The drag force of the system arises from weak, long-ranged intermolecular forces between atoms or molecules, and is exerted by the liquid surrounding the particle. The drag force depends on the frictional constant of the system, $\zeta$, and the velocity of the particle, v. The frictional constant is proportional to the viscosity of the surrounding liquid and to the radius of the particle through Stokes' law, $\zeta = 6\pi\eta a$.[4]

The $F_r(t)$ term represents the random force. The random force of the system arises from the short-ranged, strong Pauli repulsion between atoms or molecules, and is experienced as a result of the collisions of the particle with molecules of the surrounding liquid. This force changes rapidly with time, much faster than the particle's velocity changes in dense liquids.[4]

Velocity of the Particle[edit | edit source]

(1) is a first order linear differential equation and can be solved formally to give (2).

$$v(t) = v(0)\,e^{-\zeta t/m} + \frac{1}{m}\int_0^t e^{-\zeta(t-t')/m}\,F_r(t')\,dt' \qquad (2)$$[4]

The ensemble average of the velocity is,

$$\langle v(t) \rangle = v(0)\,e^{-\zeta t/m}$$

This is because the remaining acceleration of the particle is due to the random force, whose ensemble average is zero (i.e., $\langle F_r(t) \rangle = 0$), so only the drag term survives.

At short time intervals ($t \ll m/\zeta$), the average velocity becomes $\langle v(t) \rangle \approx v(0)$.

At long time intervals ($t \gg m/\zeta$), the average velocity decays to zero, $\langle v(t) \rangle \to 0$.

Displacement of the Particle[edit | edit source]

The Langevin equation expresses the rate of change of a particle's velocity. This in turn can be used to calculate diffusion, since diffusion depends on the velocity of a particle in the liquid. The Langevin equation can also be used to sample the canonical ensemble of states. By integrating the velocity from time 0 to time t, the displacement of the particle can be found.[4]

$$x(t) - x(0) = \int_0^t v(t')\,dt' \qquad (3)$$

Substituting the definition of velocity from (2) into (3) and integrating by parts gives

$$x(t) - x(0) = \frac{m\,v(0)}{\zeta}\left(1 - e^{-\zeta t/m}\right) + \frac{1}{\zeta}\int_0^t \left(1 - e^{-\zeta(t-t')/m}\right) F_r(t')\,dt' \qquad (4)$$

Squaring both sides of (4) and taking the ensemble average gives

$$\left\langle \left(x(t) - x(0)\right)^2 \right\rangle = \frac{2 k_\mathrm{B} T m}{\zeta^2}\left(\frac{\zeta t}{m} - 1 + e^{-\zeta t/m}\right) \qquad (5)$$[4]

The derivation is as follows,

The force autocorrelation $\langle F_r(t_1) F_r(t_2) \rangle$ is a rapidly varying function of the time difference $t_2 - t_1$ only, and it is nonzero only when $t_2 - t_1$ is small. So it can be re-written as $\langle F_r(0) F_r(\tau) \rangle$.[4] Let $\tau = t_2 - t_1$, and the above equation becomes,

Substituting and simplifying, the above equation becomes

Equipartition applies when the system is at thermal equilibrium, and so $\langle v(0)^2 \rangle = k_\mathrm{B}T/m$, and the above equation becomes (5).[4]

At short time intervals ($t \ll m/\zeta$), the part of the equation representing Brownian motion goes to zero.

Expanding the exponential for small $t$ ($e^{-\zeta t/m} \approx 1 - \zeta t/m + \tfrac{1}{2}(\zeta t/m)^2$), equation (5) becomes

$$\left\langle \left(x(t) - x(0)\right)^2 \right\rangle \approx \frac{k_\mathrm{B} T}{m}\,t^2 \qquad (6)$$

This corresponds to ballistic motion of the particle.[4] At short time scales, the effect of friction and collisions has not yet affected the movement of the particle.

At long time intervals ($t \gg m/\zeta$), the average velocity of the particle has decayed to zero.

At large values of t, $\frac{\zeta t}{m} - 1 + e^{-\zeta t/m} \approx \frac{\zeta t}{m}$, and so,

$$\left\langle \left(x(t) - x(0)\right)^2 \right\rangle \approx \frac{2 k_\mathrm{B} T}{\zeta}\,t = 2Dt \qquad (7)$$

where the diffusion constant, D, is equal to $k_\mathrm{B}T/\zeta$.[4]

This corresponds to the particle undergoing a random walk.[4]
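The Langevin equation above can be integrated numerically. The following is a minimal sketch of a single Euler–Maruyama step in one dimension and reduced units ($k_\mathrm{B} = 1$; the function name and the simple first-order scheme are our own choices, not a prescription from the text):

```python
import math
import random

def langevin_step(x, v, dt, m=1.0, zeta=1.0, kT=1.0):
    """One Euler-Maruyama step of the 1D Langevin equation
    m dv/dt = -zeta*v + F_r(t), where the random force obeys the
    fluctuation-dissipation relation <F_r(t) F_r(t')> = 2*zeta*kT*delta(t-t'),
    discretized as a Gaussian kick of std sqrt(2*zeta*kT*dt)/m per step."""
    sigma = math.sqrt(2.0 * zeta * kT * dt) / m
    v += (-zeta * v / m) * dt + sigma * random.gauss(0.0, 1.0)
    x += v * dt
    return x, v
```

Run long enough, the friction and the random kicks balance so that the velocity distribution equilibrates to $\langle v^2 \rangle = k_\mathrm{B}T/m$, consistent with equipartition.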

References[edit | edit source]

  1. Rozanov Y.A. (2015) Probability Theory: A concise Course, Dover Publications, USA
  2. McQuarrie, A. (2000) Statistical Mechanics, University Science Books, California
  3. McQuarrie, D. A. Statistical thermodynamics; University Science Books: Mill Valley, CA, 1997.
  4. McQuarrie, Donald (2000). Statistical Mechanics. Harper and Row. p. 452.


Periodic Boundary Conditions

A schematic representation of periodic boundary conditions. The unit cell of the system is in the center cell (pink). The potential energy of the system includes interactions between this center cell and its periodic images.
A schematic representation of the non-bonded interactions treated by the minimum image convention.
Liquids like water can be simulated using Periodic Boundary Conditions (PBC). A unit cell of the liquid is replicated periodically.
Molecular dynamics simulation of hexane in a periodic 30 Å × 30 Å × 30 Å simulation cell. Hydrogen atoms are omitted for clarity.

Macroscopic systems are extremely large and are, therefore, expensive to treat by molecular simulation. For instance, one gram of water contains about 3 × 10²² molecules, far too many to simulate explicitly, even on computers. Fortunately, periodic boundary conditions enable us to mimic an infinite system by treating a relatively small part of it, achieving a reasonable representation of the infinite system. The particles of this small subsystem are placed in a unit cell (e.g., a three-dimensional box) that is replicated periodically. During the simulation, particles are free to move in the central (original) cell, and their periodic images in the adjacent cells move in an identical way. This means any particle that crosses one boundary of the cell reappears on the opposite side.

Non-Bonded Interactions Under Periodic Boundary Conditions[edit | edit source]

A particle in a periodic cell has non-bonded interactions with the other particles, both in the original cell and in the surrounding cells (the periodic images of the original cell). The non-bonded potential energy ($\mathcal{V}$) can be written as:

$$\mathcal{V} = \sum_{\mathbf{n}} \sum_{i<j} u\!\left(\left|\mathbf{r}_{ij} + \mathbf{n}\right|\right)$$

where $\mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i$, and $\mathbf{n}$ is a translational vector to the different cell images. For a periodic cubic cell of length $L$, $\mathbf{n}$ becomes

$$\mathbf{n} = L\,(n_x, n_y, n_z)$$

where $n_x$, $n_y$, and $n_z$ are integers. The number of possible translational vectors $\mathbf{n}$ is infinite, which leads to an infinite number of non-bonded interactions. Thus, we need to make approximations to deal with this problem.

The first term of the non-bonded potential is the Lennard-Jones potential. This potential has an attractive component ($-r^{-6}$) for London dispersion, which is a short-range interaction. Accordingly, if the distance between interacting particles increases (i.e., r > 10 Å), the interaction becomes very weak, and the interactions can be truncated for distances greater than about 10 Å. However, the potential loses its continuity if truncated abruptly. This problem can be fixed by employing a switching function, which smoothly scales the van der Waals interactions down to zero at the cutoff distance. For faster computation, pair lists keep track of the particles that could come inside the cutoff region; the pair-list radius is chosen ~2 Å larger than the cutoff radius, and the lists are updated periodically during the simulation.

The second term of the non-bonded potential represents the long-range ($r^{-1}$) electrostatic potential, which cannot be truncated the same way as the Lennard-Jones potential. However, the particle mesh Ewald method can be used to treat the long-range interaction part.

Minimum Image Convention[edit | edit source]

Periodic boundary conditions use the minimum image convention to calculate distances between particles in the system: the potential interactions are calculated between each particle and the nearest periodic image of every other particle. For example, for four particles in a cubic box of length $L$, the interactions of the first particle are computed with the nearest images of the other three. The minimum image convention was first used by Metropolis and coworkers.[1]
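For a cubic box, the minimum image convention reduces to wrapping each component of the separation vector into one box length centered on zero. A minimal sketch (function names are illustrative):

```python
def minimum_image(dx, L):
    """Minimum image convention for one coordinate in a cubic periodic
    cell of side L: wrap the separation into the interval [-L/2, L/2]."""
    return dx - L * round(dx / L)

def min_image_distance(a, b, L):
    """Distance between particles a and b (3-vectors given as sequences)
    under the minimum image convention in a cubic box of side L."""
    return sum(minimum_image(bi - ai, L) ** 2
               for ai, bi in zip(a, b)) ** 0.5
```

For example, two particles at x = 0.5 Å and x = 9.5 Å in a 10 Å box are only 1 Å apart through the boundary, not 9 Å.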

Periodic Boundary Conditions with NAMD[edit | edit source]

NAMD software requires three cell basis vectors to define the size and shape of the periodic cell: cellBasisVector1, cellBasisVector2, and cellBasisVector3. For an orthorhombic cell, each vector is perpendicular to the other two. The coordinates of the center of the periodic cell can be specified by the “cellOrigin” command. In addition, the “wrapAll on” command is usually used to ensure that any particle that crosses the boundary of the original cell is imaged back into the cell.
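For concreteness, a minimal NAMD configuration fragment for a cubic periodic cell might look like the following (the 30 Å box size and origin are illustrative values, matching the hexane example above):

```
# Periodic cell: a 30 Å x 30 Å x 30 Å cubic box (illustrative values)
cellBasisVector1   30.0   0.0   0.0
cellBasisVector2    0.0  30.0   0.0
cellBasisVector3    0.0   0.0  30.0
cellOrigin          0.0   0.0   0.0

# Image particles back into the cell when they cross a boundary
wrapAll on
```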

References[edit | edit source]

  1. Metropolis, N.; Rosenbluth, A. W.; Rosenbluth, M. N.; Teller, A. H.; Teller, E. (1953). "Equation of State Calculations by Fast Computing Machines". The Journal of Chemical Physics. 21: 1087–1092.

Further Reading[edit | edit source]

Allen, M. P.; Tildesley, D. J. Computer Simulation of Liquids; Oxford University Press: New York, 1989.

Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation; Oxford University Press: New York, 2010.


Treatment of Long Range Forces

Treatment of long-range interactions in molecular simulations poses a serious challenge in terms of computational requirements. By employing periodic boundary conditions, bulk properties of a sample may be calculated efficiently using a limited number of molecules within a finite cell. Each molecule in the cell interacts with every other molecule, both within the same cell and within the infinite periodic cell images. These interactions include dispersion, repulsion, and electrostatic interactions. Essentially, this means that there are an infinite number of intermolecular interactions that must be calculated during the simulation, which is a problem in practical computation.

As is common in molecular simulation, the Lennard-Jones potential is used to describe both dispersion and repulsion interactions. These interactions decay fairly quickly with distance and can thus be dealt with fairly easily in computation. Electrostatic interactions (such as charge-dipole, dipole-dipole, quadrupole-quadrupole, etc.) are often significantly stronger at a given distance $r$ and decay much more slowly as $r$ increases; the lower the power of $r$ in the interaction, the slower the decay. This makes their treatment somewhat more complicated.

Cutoffs[edit | edit source]

The unmodified Lennard-Jones potential (blue) and the Lennard-Jones potential with a smooth potential-based cutoff using a switching function (red). The switching function is active in the grey region. The switched potential is zero beyond the cutoff, beginning in the blue region.

Non-bonded cutoffs are employed so as to save computational resources without sacrificing the validity of the results. These cutoffs truncate long-range interactions between atoms or molecules at a specific distance. Because intermolecular interaction potentials converge to a particular value (namely zero) as $r \to \infty$, the interaction potential can effectively be set to this value beyond a certain distance. In the case of dispersion and repulsion interactions, which are modelled by the Lennard-Jones potential, a switching function, $S(r)$, is employed in the simulation. This switching function brings the interaction smoothly to zero by a desired cutoff distance, at which point the two molecules or atoms involved do not interact through the Lennard-Jones potential at all,

The intermolecular distance at which the switching function is turned on is the switch distance, which is generally about 2 Å less than the cutoff distance. In between the switch distance and the cutoff distance, the interaction potential is gradually scaled to zero by the switching function. This gradual change in interaction potential maintains the physical basis of the simulation without using an abrupt, non-physical cutoff of interactions (as can be seen in the hard-sphere model for repulsive interactions). This method avoids a discontinuity in the interaction potential that would be caused by an abrupt truncation of the interaction potential function.
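As a sketch of how such a switched potential can be implemented, the snippet below uses one common polynomial form of the switching function (an assumption; the text does not specify the functional form) applied to the Lennard-Jones potential in reduced units:

```python
def switching(r, r_switch, r_cut):
    """A common polynomial switching function S(r): equal to 1 below
    r_switch, 0 beyond r_cut, and smoothly interpolating in between:
    S = (rc^2 - r^2)^2 * (rc^2 + 2r^2 - 3rs^2) / (rc^2 - rs^2)^3."""
    if r < r_switch:
        return 1.0
    if r > r_cut:
        return 0.0
    num = (r_cut**2 - r**2) ** 2 * (r_cut**2 + 2.0 * r**2 - 3.0 * r_switch**2)
    return num / (r_cut**2 - r_switch**2) ** 3

def lj_switched(r, epsilon=1.0, sigma=1.0, r_switch=8.0, r_cut=10.0):
    """Lennard-Jones potential scaled by the switching function, so the
    interaction goes smoothly (not abruptly) to zero at the cutoff."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6) * switching(r, r_switch, r_cut)
```

Note that $S(r_\text{switch}) = 1$ and $S(r_\text{cut}) = 0$ exactly, so the potential is continuous at both ends of the switching region.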

Such a convenient treatment of long-range interactions is easily applied to dispersion and repulsion interactions modelled by the Lennard-Jones potential, but is not applicable in all cases. For electrostatic interactions, other methods (such as Ewald summation, see below) must be employed. This is because electrostatic interactions are typically much longer-range than dispersion and repulsion interactions, and truncation through the use of simple switching functions could result in a serious decrease in accuracy of results. These interactions are stronger than dispersion or repulsion, and must be treated more carefully.

Ewald Decomposition[edit | edit source]

While truncation of the Lennard-Jones potential for dispersion and repulsion interactions is made relatively simple through the use of switching functions and non-bonded cutoffs, dealing with long-range electrostatic interactions is less straightforward, and requires a different approach. Evaluation of these interactions is commonly performed using the Ewald method, which divides electrostatic interactions into short-range and long-range components. The short-range component quickly decays to zero in real space, thus it can be truncated using a switching function, as was the case for the Lennard-Jones potential. The long-range component is more difficult to deal with, though it can be calculated efficiently in reciprocal space. In the Ewald summation method, the Gauss error function (erf) and complementary error function (erfc) are employed.[1] These functions can be seen as

The Ewald decomposition divides the Coulombic interaction into a short-range real-space component and a long-range reciprocal-space component by factoring it with the erfc(r) and erf(r) identity. The real-space erfc(αr)/r component decays rapidly to zero with distance, so truncating it introduces little error.

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt$$

and

$$\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt$$

with $\mathrm{erf}(0) = 0$ and $\mathrm{erfc}(0) = 1$. These functions are thus complementary. Also, note that

$$\lim_{x \to \infty} \mathrm{erf}(x) = 1$$

and

$$\lim_{x \to \infty} \mathrm{erfc}(x) = 0$$

thus, by definition,

$$\mathrm{erf}(x) + \mathrm{erfc}(x) = 1.$$

In using these Gauss functions, the $1/r$ term of the Coulombic interaction potential is broken down into the aforementioned short- and long-range terms,

$$\frac{1}{r} = \frac{\mathrm{erfc}(\alpha r)}{r} + \frac{\mathrm{erf}(\alpha r)}{r}$$

where $\alpha$ is a tunable constant chosen to ensure the short-range electrostatic interaction decays to zero at the desired distance. The $\mathrm{erfc}(\alpha r)/r$ term corresponds to the short-range interaction potential, which decays quickly to zero through evaluation in real space, while the $\mathrm{erf}(\alpha r)/r$ term corresponds to the long-range interaction potential, which is more easily evaluated in reciprocal space.[2] This potential is evaluated as a sum of all possible charge-charge interactions across the infinite periodic images of the simulation cell. In reciprocal space, these interactions converge quickly, allowing efficient computation through application of fast Fourier transforms.[1] The sum of all charge-charge interactions at long range throughout all periodic images of the simulation system can be represented as

$$\mathcal{V}_{\text{long}} = \frac{1}{2} \sum_{\mathbf{n}} \sum_{i} \sum_{j} q_i q_j\, \frac{\mathrm{erf}\!\left(\alpha \left|\mathbf{r}_{ij} + \mathbf{n}\right|\right)}{\left|\mathbf{r}_{ij} + \mathbf{n}\right|}$$

where $\mathbf{r}_{ij}$ is the distance vector between interacting charges $q_i$ and $q_j$, and $\mathbf{n}$ is the translational vector between different periodic cell images. Equivalently, this sum can be written as a Fourier series,[1]

where $\phi(\mathbf{k})$ is a reciprocal-space expansion coefficient and $\mathbf{k}$ is a reciprocal-space vector. Note that while this sum is still infinite, its terms decay quickly as $|\mathbf{k}|$ increases, allowing efficient computation of long-range electrostatic interactions through application of fast Fourier transforms.
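The erf/erfc decomposition of a single $1/r$ term can be checked numerically with Python's standard library (a minimal sketch for unit charges; the function name and the value of $\alpha$ are illustrative):

```python
import math

def coulomb_split(r, alpha=0.3):
    """Ewald decomposition of a single 1/r Coulomb term (unit charges):
    a short-range real-space part erfc(alpha*r)/r, which decays rapidly
    with distance, plus a long-range part erf(alpha*r)/r. Because
    erf(x) + erfc(x) = 1, the two parts sum exactly to 1/r."""
    short_range = math.erfc(alpha * r) / r
    long_range = math.erf(alpha * r) / r
    return short_range, long_range
```

At large r the short-range part is essentially zero (so it can be truncated safely), while at small r it carries almost the entire interaction.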

Particle Mesh Ewald[edit | edit source]

The particle mesh Ewald method is a numerical approximation to Ewald summation.[3] It involves a "smearing" of charges across a finite set of lattice points, located on a grid (or "mesh") that is placed over the simulation cell. By localizing charge density on this grid, the overall charge distribution across the simulation cell and its infinite periodic images is reduced to a more manageable form. This makes calculation of long-range electrostatic interactions significantly easier and less expensive, particularly in systems with a large simulation cell length, and thus a large number of reciprocal-space ($\mathbf{k}$) vectors. From the lattice of "smeared" charge densities, long-range interaction potentials can be calculated using a fast Fourier transform. While this approximation decreases the computational cost significantly, accuracy of collected data is still high. The particle mesh Ewald method is also relatively simple to integrate into a simulation.[4] These factors combine to make it an extremely attractive method to calculate long-range electrostatic interaction potentials.

See Also[edit | edit source]

Wikipedia article on Ewald summation


Monte Carlo Methods

Monte Carlo methods are stochastic techniques that use random numbers to sample conformation space. In these methods, a random trial conformation is generated, and then it is determined whether to accept or reject it.

A Metropolis Monte Carlo simulation of Liquid Argon at 98 K

A Monte Carlo simulation starts from a given conformation; random numbers then generate a new trial conformation, which is either accepted or rejected. If it is accepted, this conformation becomes the current conformation, and the next iteration starts from it.


Monte Carlo Integration[edit | edit source]

Using the Monte Carlo method to evaluate a definite integral. A random sample is accepted if it falls below the graph of the function.

Computing a definite integral is a simple illustrative example of using the Monte Carlo method. Consider a simple function f(x) and its definite integral from 0 to 1:

In this case, the conformation space is two-dimensional, and random x and y values are generated to make each new trial point. Only the trial points (pairs of x and y values) that fall below the graph of the function are accepted. The number of accepted points divided by the total number of samples approximates the probability for samples to be accepted. Multiplying this probability by the area of the sampling space approximates the value of the integral.


A larger number of samples will result in a value closer to the expected value.


This method is usually used for higher-dimensional integrals.
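The hit-or-miss procedure described above can be sketched in a few lines of Python. The choice of $f(x) = x^2$ (with exact integral 1/3) is purely illustrative, since the text does not fix a particular function:

```python
import random

def mc_integrate(f, n_samples=100000, y_max=1.0):
    """Hit-or-miss Monte Carlo estimate of the integral of f on [0, 1],
    assuming 0 <= f(x) <= y_max. Random (x, y) points are drawn uniformly
    in the sampling rectangle; points below the curve count as hits, and
    the hit fraction times the rectangle area estimates the integral."""
    hits = 0
    for _ in range(n_samples):
        x = random.random()
        y = random.random() * y_max
        if y <= f(x):
            hits += 1
    return (hits / n_samples) * y_max  # acceptance probability x area
```

With 10⁵ samples, the statistical error of the estimate is on the order of $\sqrt{p(1-p)/n} \approx 0.15\%$, illustrating the slow $1/\sqrt{n}$ convergence that makes the method most attractive for high-dimensional integrals.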

Metropolis Monte Carlo Simulation[edit | edit source]

The schematic of the Metropolis Monte Carlo algorithm. Moves that decrease the potential energy of the system ($\Delta U \le 0$) are always accepted. Moves that increase the potential energy ($\Delta U > 0$) are accepted with a probability determined by the Boltzmann distribution.

In molecular simulations, the Metropolis Monte Carlo algorithm is usually used to accept or reject the trial conformation. If the energy of the trial conformation is lower than or equal to the current energy, it is always accepted. If the energy of the trial conformation is higher than the current energy, it is accepted with a probability determined by the Boltzmann distribution.


At higher temperatures, high energy increases are more likely to be accepted. Conversely, at lower temperatures there is a greater probability of rejecting the state.

This means that, if $\Delta U > 0$, a uniform random number $R \in [0, 1)$ is compared against $\exp(-\Delta U / k_\mathrm{B}T)$ to determine whether to accept or reject the conformation.

The Metropolis algorithm satisfies microscopic reversibility, meaning that the probability of being in a state and going to the next state is the same as being in the new state and going back to the previous state.
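The acceptance rule can be sketched in a few lines of Python (reduced units with $k_\mathrm{B} = 1$; the function name is illustrative):

```python
import math
import random

def metropolis_accept(delta_U, kT=1.0):
    """Metropolis criterion: always accept moves that lower (or keep) the
    potential energy; accept uphill moves with probability exp(-dU/kT),
    by comparing a uniform random number against the Boltzmann factor."""
    if delta_U <= 0.0:
        return True
    return random.random() < math.exp(-delta_U / kT)
```

Over many trials, an uphill move of $\Delta U = k_\mathrm{B}T$ is accepted a fraction $e^{-1} \approx 37\%$ of the time, and the acceptance rate of a given uphill move rises with temperature.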

Sampling[edit | edit source]

The configurational integral requires integration of the Boltzmann distribution over all spatial variables. To employ Monte Carlo integration to estimate the configurational integral (Z), a large number of configurations are randomly generated and the Boltzmann weight is evaluated only at these points. With enough samples, the configurational integral can be approximated.

But direct Monte Carlo sampling is extremely inefficient because a huge number of states must be sampled. Moreover, high-energy states are sampled with the same probability as low-energy states, even though, by the Boltzmann distribution, the system is far more likely to be found in a low-energy state.

The Metropolis algorithm changes the acceptance probability so that the system preferentially samples low-energy states.

Limitations of Monte Carlo Simulations[edit | edit source]

With Monte Carlo simulations, only an equilibrium distribution of states can be sampled. Moves are random and non-physical, so a trajectory corresponds only to a sequence of Monte Carlo moves and carries no temporal information; dynamical properties like diffusion coefficients therefore cannot be calculated from a Monte Carlo trajectory.

Equilibration[edit | edit source]

Metropolis Monte Carlo cycles favor transitions to lower energy and move the system toward more probable structures. Many iterations of the Metropolis Monte Carlo algorithm will bring the system to an equilibrium state. Multiple properties can be plotted to determine when equilibration is complete.

The approach to equilibrium appears as a slow drift in these properties; equilibration is not complete until the drift has stopped and the properties fluctuate around an average value.

Equilibrium properties[edit | edit source]

At equilibrium, the system is not undergoing systematic change, and there are no large-scale gradients in the physical properties of matter. If a small perturbation is made to the system, the system returns to its equilibrium state. At equilibrium there are no momentum, concentration, or temperature gradients in the system.

See Also[edit | edit source]


Molecular Dynamics

Molecular Dynamics Simulation of Argon

Molecular dynamics simulation is a tool for simulating the motions of the atoms of a many-body system. Molecular dynamics is used to compute equilibrium and transport properties (viscosity, thermal conductivity, diffusion, reaction rates, protein folding times, structure, and surface coating) of a classical system. To simulate the dynamics of molecules, classical Newtonian mechanics is used: the classical equations of motion for atoms and molecules are solved to obtain the time evolution of the system. In practice, molecular dynamics simulations are performed by moving in small increments of time. Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically, so MD solves this problem by using numerical methods.

Verlet Algorithm[edit | edit source]

One popular algorithm that predicts the positions of atoms at a later point in time is the Verlet algorithm.[5] This algorithm approximates the position of the particles at time $t + \delta t$ (one time step into the future from time t) by using a Taylor series approximation.

A Taylor series approximates a function at point x based on the values and derivatives of that function at another point a.

Derivation of the Verlet Algorithm[edit | edit source]

A Taylor series approximates the position of the particle at time $t + \delta t$:

$$\mathbf{r}(t+\delta t) = \mathbf{r}(t) + \mathbf{v}(t)\,\delta t + \tfrac{1}{2}\,\mathbf{a}(t)\,\delta t^2 + \cdots$$

The first and second derivatives of position correspond to velocity and acceleration, and acceleration is related to force through Newton's second law ($\mathbf{a} = \mathbf{F}/m$).

Considering one time step backward in time:

$$\mathbf{r}(t-\delta t) = \mathbf{r}(t) - \mathbf{v}(t)\,\delta t + \tfrac{1}{2}\,\mathbf{a}(t)\,\delta t^2 - \cdots$$

Taking the sum of the positions at the next and previous steps:

$$\mathbf{r}(t+\delta t) + \mathbf{r}(t-\delta t) = 2\,\mathbf{r}(t) + \mathbf{a}(t)\,\delta t^2$$

This rearranges to the Verlet algorithm:

$$\mathbf{r}(t+\delta t) = 2\,\mathbf{r}(t) - \mathbf{r}(t-\delta t) + \mathbf{a}(t)\,\delta t^2$$

Derivation of the Velocity Verlet Algorithm[edit | edit source]

The velocity Verlet algorithm is an alternative form of the Verlet equations that involves velocities explicitly. Using a Taylor series for the next time step from the current time:

$$\mathbf{r}(t+\delta t) = \mathbf{r}(t) + \mathbf{v}(t)\,\delta t + \tfrac{1}{2}\,\mathbf{a}(t)\,\delta t^2$$

Using the current time step from the next time step (i.e., expanding backward from $t + \delta t$):

$$\mathbf{r}(t) = \mathbf{r}(t+\delta t) - \mathbf{v}(t+\delta t)\,\delta t + \tfrac{1}{2}\,\mathbf{a}(t+\delta t)\,\delta t^2$$

By substitution:

$$\mathbf{v}(t+\delta t) = \mathbf{v}(t) + \tfrac{1}{2}\left[\mathbf{a}(t) + \mathbf{a}(t+\delta t)\right]\delta t$$

The Velocity Verlet Algorithm
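The velocity Verlet update can be sketched in a few lines (shown here in one dimension with reduced units; the function name and the callback interface are our own choices):

```python
def velocity_verlet(r, v, accel, dt, n_steps):
    """Velocity Verlet integration: the position is advanced with the
    current acceleration, the acceleration is re-evaluated at the new
    position, and the velocity is advanced with the average of the old
    and new accelerations. Returns the list of (r, v) pairs."""
    a = accel(r)
    traj = [(r, v)]
    for _ in range(n_steps):
        r = r + v * dt + 0.5 * a * dt * dt
        a_new = accel(r)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append((r, v))
    return traj
```

For a harmonic oscillator (accel = lambda r: -r), the total energy $\tfrac{1}{2}v^2 + \tfrac{1}{2}r^2$ stays essentially constant along the trajectory, illustrating the good energy conservation of the Verlet family of integrators.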

See Also[edit | edit source]

Wikipedia:Verlet integration Chemwiki: Lennard-Jones

Time step[edit | edit source]

The time step for most molecular dynamics simulations is on the femtosecond scale, which is the scale of chemical bond vibrations. Molecular dynamics simulations are limited by the highest-frequency vibration: normally the time step should be about ten times smaller than the period of the highest-frequency vibration. Generally, molecular dynamics simulations of organic molecules use a time step of 1–2 fs (10⁻¹⁵ s) because of the fast vibrations of bonds containing hydrogen.

The first problem is that some processes, such as protein folding, require relatively long simulation times to observe in their entirety. To observe these processes in an MD simulation, we need simulations at least microseconds in length.

A microsecond corresponds to roughly a billion 1 fs time steps, a very big number even for a computer. To solve this problem, constraints are applied: many molecular dynamics simulations constrain all hydrogen-containing bonds to their equilibrium lengths, because hydrogen is a very light atom and these bonds have the fastest vibrations. By constraining these bonds, the fastest vibrations are removed, allowing a longer time step.

Another consideration is that the quantum energy levels of covalent bond vibrations are widely spaced, so these vibrations are poorly described by classical mechanics; it is better to treat the bonds as rigid.

Time reversibility[edit | edit source]

An isolated system propagating under Newton's laws of motion is time-reversible, and most molecular dynamics algorithms share this property: running the trajectory forward and then backward by the same number of steps returns the system to its starting point through the same sequence of states. Consequently, two MD simulations with the same initial conditions and time steps will follow exactly the same trajectory.

A second consequence of this is that the system is deterministic. If you know the positions and velocities of the particles at a given point in time, and can calculate the forces acting on them in any configuration, you could in principle integrate the laws of motion to determine the state of the system (i.e., positions and velocities) at any point in the future.

Sampling Thermodynamic Ensembles Using Molecular Dynamics[edit | edit source]

Microcanonical ensemble (NVE)[edit | edit source]

Conservation of the total kinetic and potential energy in a molecular dynamics simulation of liquid argon using the Verlet equation

A molecular dynamics simulation with an integrator that approximates Newtonian dynamics (e.g., the Verlet algorithm) will conserve the total energy of the system (i.e., the sum of kinetic and potential energies remains constant at each time step). As a result, these simulations sample the microcanonical (NVE) ensemble, in which the total energy of the system (E) is conserved.

The Verlet equations and all other integrators have some error associated with the truncation of higher-order terms in the Taylor series expansion. This error is more significant when larger time steps are used. Further, there is numerical error resulting from rounding in finite-precision computer arithmetic (e.g., 32- or 64-bit floating point). As a result, the total energy of a molecular dynamics simulation can drift from its initial value over the course of the simulation.

Sampling Different Ensembles[edit | edit source]

To sample a different ensemble, the equations of motion must be modified. This is commonly described as performing a molecular dynamics simulation with a thermostat or barostat.

Canonical ensemble (NVT)[edit | edit source]

In the canonical ensemble, amount of substance (N), volume (V) and temperature (T) are constant. In this ensemble, the energy of endothermic and exothermic processes is exchanged with a thermostat. Several methods have been developed that modify the equations of motion of a molecular dynamics simulation so that they sample the canonical ensemble, such as the wikipedia::Andersen thermostat and wikipedia::Nosé–Hoover thermostat.

Isothermal–isobaric ensemble (NPT)[edit | edit source]

In the isothermal–isobaric ensemble, the amount of substance (N), pressure (P), and temperature (T) are held constant. Configurations of the particles are Boltzmann-weighted in this ensemble, and a barostat is needed to maintain the pressure.

Visualizing the MD Simulations[edit | edit source]

Molecular Dynamics Simulation of POPC bilayer

Time evolution of positions and velocities of the particles of the system is called a trajectory. This can be rendered into an animation that shows the motion of the particles over the course of the simulation. VMD is a popular program to visualize molecular dynamics simulations.

See Also[edit | edit source]

Wikipedia:Molecular Dynamics

Tuckerman Notes: The Hamiltonian formulation of classical mechanics

Literature[edit | edit source]

  1. Tuckerman, M. E. (2009). Statistical Mechanics: Theory and Molecular Simulation. Oxford University Press.
  2. Allen, M. P.; Tildesley, D. J. (1987). Computer Simulation of Liquids. Clarendon Press.
  3. Herce, H. D.; Garcia, A. E.; Darden, T. (28 March 2007). "The electrostatic surface term: (I) Periodic systems". The Journal of Chemical Physics. 126 (12): 124106. Bibcode:2007JChPh.126l4106H. doi:10.1063/1.2714527. PMID 17411107.
  4. Darden, T.; York, D.; Pederson, L. (15 June 1993). "Particle mesh Ewald: An N·log(N) method for Ewald sums in large systems". The Journal of Chemical Physics. 98 (12): 10089. doi:10.1063/1.464397.
  5. Verlet, Loup (1967). "Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard−Jones Molecules". Physical Review. 159: 98–103. doi:10.1103/PhysRev.159.98.


Molecular Dynamics of the Canonical and Isothermal-Isobaric Ensembles

Distribution of the potential energy of a molecular simulation in the canonical (NVT) ensemble.
Isothermal-Isobaric Molecular Dynamics Simulation of Water

The Canonical and Isothermal-Isobaric Ensembles[edit | edit source]

An ensemble is a representation of the various states of a system in thermodynamic equilibrium. The constraints operating on the system determine the type of ensemble.

A canonical ensemble represents the possible states of a system characterized by constant values of N, V, and T (constant volume and temperature). The energy of the microstates can fluctuate, giving a distribution of energies. In this ensemble, the system is in contact with a heat bath at a fixed temperature.

The isothermal–isobaric ensemble is a collection of systems characterized by the same values of N, P, and T (a constant-temperature, constant-pressure ensemble). This ensemble allows the volume and the energy to fluctuate, giving distributions of energies and volumes. Configurations are Boltzmann-weighted at pressure P, with the system surrounded by a heat bath at temperature T.

Thermostats/ Barostats[edit | edit source]

A thermostat is a modification of the equations of motion that generates a statistical ensemble at constant temperature. The most commonly used thermostats in molecular dynamics are the Langevin, Andersen, and Nosé–Hoover thermostats.

The Langevin thermostat is a stochastic thermostat that applies friction and a random force to the momenta.

The Andersen thermostat reassigns the velocity of a randomly chosen particle to a new velocity drawn from the Maxwell–Boltzmann distribution. In this thermostat, the system is coupled to a heat bath that imposes the desired temperature; the equations of motion are Hamiltonian apart from the stochastic collision term.

Because this thermostat is stochastic, the dynamics are not physical: the trajectory is no longer time-reversible or deterministic.
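The collision step of the Andersen thermostat can be sketched as follows (one-dimensional velocities in reduced units with $k_\mathrm{B} = 1$; the function name and collision-frequency parameterization are illustrative):

```python
import math
import random

def andersen_collide(velocities, nu, dt, kT=1.0, m=1.0):
    """Andersen thermostat collision step: each particle suffers a
    'collision' with the heat bath with probability nu*dt during a time
    step; its velocity is then redrawn from the Maxwell-Boltzmann
    distribution at temperature T (a Gaussian with variance kT/m)."""
    sigma = math.sqrt(kT / m)
    return [random.gauss(0.0, sigma) if random.random() < nu * dt else v
            for v in velocities]
```

Applied repeatedly, this drives the velocity distribution toward $\langle v^2 \rangle = k_\mathrm{B}T/m$ regardless of the starting velocities, which is how the bath imposes its temperature.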

The Nosé–Hoover thermostat allows one to simulate a system in the NVT ensemble. The idea is to introduce a fictitious degree of freedom that couples the dynamics to the heat bath through the system Hamiltonian. The Nosé–Hoover equations are reversible and deterministic, and they sample a distribution equivalent to the canonical ensemble.

In barostats, similarly to temperature coupling, an additional term is added to the equations of motion to effect pressure changes. Two of the most used barostats are the Andersen barostat and the Parrinello–Rahman barostat.

Nose-Hoover thermostat derivation[edit | edit source]

In the Nosé–Hoover thermostat, the Hamiltonian has a fictitious degree of freedom for the heat bath:

$$H = \sum_i \frac{\mathbf{p}_i^2}{2 m_i s^2} + U(\mathbf{r}) + \frac{p_s^2}{2Q} + g k_\mathrm{B} T \ln s$$

Where:

$p_s$: the momentum of the fictitious degree of freedom.

Q: the effective mass.

s: the extended variable.

$g k_\mathrm{B} T \ln s$: chosen to be the potential energy of the fictitious degree of freedom ($g$ is the number of degrees of freedom).


Equations of motion follow from Hamilton's equations.

Velocities of the particles:

$$\dot{\mathbf{r}}_i = \frac{\partial H}{\partial \mathbf{p}_i} = \frac{\mathbf{p}_i}{m_i s^2}$$

Velocity of the "agent":

$$\dot{s} = \frac{\partial H}{\partial p_s} = \frac{p_s}{Q}$$

Acceleration of the particles:

$$\dot{\mathbf{p}}_i = -\frac{\partial H}{\partial \mathbf{r}_i} = \mathbf{F}_i$$

Acceleration of the "agent":

$$\dot{p}_s = -\frac{\partial H}{\partial s} = \frac{1}{s}\left(\sum_i \frac{\mathbf{p}_i^2}{m_i s^2} - g k_\mathrm{B} T\right)$$


Umbrella Sampling

Umbrella sampling is a sampling method used in computational physics and chemistry. It can sample rare states that normal molecular dynamics sampling misses; therefore, umbrella sampling can improve free energy calculations for processes that a direct simulation samples poorly.

Biased molecular dynamics simulations[edit | edit source]

Normal MD simulations sample a system at equilibrium. In an MD simulation of the time series of the C-C-C-C dihedral angle of n-butane(aq), only gauche and trans states are sampled. Because the simulation is only 2 ns long, states with high free energy (e.g., the cis state) are unlikely to occur. These configurations are missed, and it is impossible to calculate their free energies from such a simulation. An artificial bias potential is needed in this case to help the molecule cross the energy barrier. With a bias potential, rare states can be sampled effectively.

The time series of the C-C-C-C dihedral angle of n-butane(aq) from a 2 ns molecular dynamics simulation.

In this case, a harmonic bias potential is needed to counteract the dihedral barrier.

The time series of the C-C-C-C dihedral angle of n-butane(aq) from a 2 ns molecular dynamics simulation with a bias potential to stabilize the gauche conformations.

High free energy states are captured by the biased simulation. In order to calculate the free energy profile of these states, the biased probability distribution has to be converted to an unbiased probability distribution.

An isothermal-isobaric molecular dynamics simulation of butane in liquid water. A bias potential is applied to the C-C-C-C dihedral angle to facilitate sampling of the butane rotational barrier.

Acquire free energy profile from biased simulations[edit | edit source]

The potential energy, including the bias potential w(s) applied along the reaction coordinate s, is

E^b(r) = E(r) + w(s)

The probability distribution sampled under this biased potential is

P^b(s) ∝ ∫ exp(−[E(r) + w(s(r))]/k_B T) δ(s(r) − s) dr

The probability distribution of the unbiased potential is

P(s) ∝ ∫ exp(−E(r)/k_B T) δ(s(r) − s) dr

From these equations, we can derive

P(s) = P^b(s) exp(w(s)/k_B T) ⟨exp(−w(s)/k_B T)⟩

The free energy profile can be calculated from the probability distribution by

A(s) = −k_B T ln P(s)

Using this relation, the PMF of the biased simulation can be converted to the unbiased PMF by:

A(s) = −k_B T ln P^b(s) − w(s) + F

The last term is denoted F. It is a constant that shifts the whole profile, so when only relative free energies are needed it does not have to be calculated. It can be calculated by[1]:

F = −k_B T ln⟨exp(−w(s)/k_B T)⟩

The potential of mean force for the C-C-C-C dihedral angle of n-butane calculated from a molecular dynamics simulation with a bias potential. The PMF from biased probability distribution is plotted in red. The bias potential is plotted in blue.

This method provides the free energy profile of all accessible states. In the umbrella sampling of n-butane(aq), the chosen bias potential covered the entire reaction coordinate. General cases are more complex, which makes the determination of the bias potential more involved.
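The unbiasing step can be sketched numerically. The function below histograms samples of the reaction coordinate from a biased run and applies A(s) = −kT ln P_b(s) − w(s), dropping the additive constant F; the function name, the binning choices, and the reduced units are illustrative assumptions:

```python
import math

def unbiased_pmf(samples, bias, kT=1.0, nbins=60, smin=-3.0, smax=3.0):
    """Recover the unbiased PMF (up to an additive constant) from samples
    of the reaction coordinate generated with the bias potential w(s)."""
    width = (smax - smin) / nbins
    counts = [0] * nbins
    for s in samples:
        i = min(max(int((s - smin) / width), 0), nbins - 1)  # clamp to range
        counts[i] += 1
    total = len(samples)
    pmf = {}
    for i, c in enumerate(counts):
        if c == 0:
            continue                       # empty bins carry no estimate
        s = smin + (i + 0.5) * width       # bin centre
        p_biased = c / (total * width)     # biased probability density
        pmf[s] = -kT * math.log(p_biased) - bias(s)
    return pmf
```

For example, if the samples are drawn from a biased density proportional to exp(−s²/2) and the bias used was w(s) = s²/4, the recovered PMF should be s²/4 plus a constant.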

Choice of Bias Potential[edit | edit source]

The previous section discussed the biased molecular dynamics simulation of n-butane(aq). The reaction coordinate is one-dimensional and periodic, and the bias potential was chosen to be the negative of the dihedral potential of n-butane[2]. The optimal bias potential is the negative of the free energy profile, w(s) = −A(s)[1]. However, A(s) is unknown in most cases. For general cases, the bias potential needs to be adjusted along the reaction coordinate. Thus, a harmonic bias potential restraining the system to a reference point s_i within a window on the reaction coordinate is introduced[2]:

w_i(s) = (K/2)(s − s_i)²

Therefore, a full umbrella sampling can be obtained from a series of biased MD simulations with different reference points along the reaction coordinate.

Calculation of the Potential of Mean Force from Umbrella Sampling Data[edit | edit source]

The Weighted Histogram Analysis Method (WHAM)[3] combines the series of biased distribution functions into a single unbiased distribution function. The window free energies F_i need to be estimated to give the correct value of P(s):

F_i = −k_B T ln ∫ P(s) exp(−w_i(s)/k_B T) ds

The true distribution P(s) is the weighted average of the unbiased estimates P_i(s) from the individual windows[1]:

P(s) = Σ_i p_i(s) P_i(s)

with weights p_i(s) ∝ N_i exp(−(w_i(s) − F_i)/k_B T), normalized so that Σ_i p_i(s) = 1, where N_i is the total number of steps sampled for window i[3].

Since the equations for P(s) and F_i depend on each other, both can be obtained by iterating them to self-consistency.
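The self-consistent iteration can be sketched on binned data as follows. This is a minimal illustration of the two WHAM equations; the array layout, names, and reduced units (k_B T = 1) are assumptions:

```python
import math

def wham(hists, biases, n_samples, kT=1.0, iters=100):
    """Iterate the two WHAM equations to self-consistency.

    hists[i][b]  : histogram counts in bin b from window i
    biases[i][b] : bias potential w_i evaluated at the centre of bin b
    n_samples[i] : total number of samples in window i
    Returns the normalized unbiased distribution P and window free energies F.
    """
    nwin, nbins = len(hists), len(hists[0])
    f = [0.0] * nwin
    P = [0.0] * nbins
    for _ in range(iters):
        # unbiased distribution from the current free-energy estimates
        for b in range(nbins):
            num = sum(hists[i][b] for i in range(nwin))
            den = sum(n_samples[i] * math.exp((f[i] - biases[i][b]) / kT)
                      for i in range(nwin))
            P[b] = num / den
        # window free energies from the current distribution
        for i in range(nwin):
            Z = sum(P[b] * math.exp(-biases[i][b] / kT) for b in range(nbins))
            f[i] = -kT * math.log(Z)
    norm = sum(P)
    return [p / norm for p in P], f
```

With a single unbiased window, the iteration reduces to the plain normalized histogram, which is a useful sanity check.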

Another way to analyze umbrella sampling data is umbrella integration; see [1].

See also[edit | edit source]

Wikipedia:Umbrella sampling

For more information about umbrella sampling, see [4].

References[edit | edit source]

  1. a b c d Kästner, Johannes (2011). "Umbrella sampling". Wiley Interdisciplinary Reviews: Computational Molecular Science. 1 (6): 932–942.
  2. a b Invalid <ref> tag; no text was provided for refs named r1
  3. a b Kumar, S; Rosenberg, JM; Bouzida, D; Swendsen, RH (1992). "The weighted histogram analysis method for free‐energy calculations on biomolecules. I. The method". Journal of computational chemistry. 13 (8): 1011–1021. {{cite journal}}: Cite has empty unknown parameter: |1= (help)
  4. Torrie, GM; Valleau, JP (February 1977). "Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella sampling". Journal of Computational Physics. 23 (2): 187–199. doi:10.1016/0021-9991(77)90121-8.


Free Energy Methods

Free energy perturbation and thermodynamic integration are molecular simulation methods to calculate the relative free energies (i.e., Gibbs or Helmholtz) of two states.


Replica Exchange Molecular Dynamics

A replica exchange molecular dynamics simulation of the folding of a peptide. The lowest temperature replica is labeled in bold.
A replica exchange molecular dynamics simulation where there are 4 replicas at different temperatures. m steps of molecular dynamics are completed between exchange attempts.

Replica exchange molecular dynamics involves performing simultaneous simulations on many copies (replicas) of a system, each with a different value of a certain control variable. Parallel tempering is a replica exchange method in which temperature is used as the control parameter. A set of temperatures T1 < T2 < … < TM is assigned to the M copies of the system. The probability distribution of each independent system depends on the configuration of all the atoms within that system, and the state of the full simulation is the complete set of configurations of the M copies. Replica exchange simulations are performed on canonical ensemble (NVT) systems. Exchanges of coordinates between neighbouring pairs of replicas are attempted periodically, and each attempted move is accepted or rejected based on a Metropolis Monte Carlo criterion[1].

A schematic of a replica exchange simulation carried out on 4 replicas at different temperatures is shown. m steps of MD are performed on each individual replica between exchange attempts. In the schematic, the first attempted exchange, between replicas 3 and 4, is accepted, and the two replicas swap configurations. The attempted exchange between replicas 1 and 2 is rejected, so each replica retains its configuration.

Advantages[edit | edit source]

The aim of replica exchange molecular dynamics is to ensure that high-temperature configurations can easily cross barriers on the potential energy surface. The low-temperature replicas sample the lowest-energy minima, while the high-temperature replicas can sample the entire surface, making configurations separated by high barriers accessible to the low-temperature replicas through exchanges. Conventional molecular dynamics or Monte Carlo methods show extremely slow convergence when calculating equilibrium properties: it takes a long time for a configuration to escape a local minimum, because the probability of crossing an energy barrier of height ΔE is proportional to exp(−ΔE/k_B T). Replica exchange methods improve sampling efficiency by making it easier to cross such barriers.

The Acceptance Probability[edit | edit source]

Consider a simulation of two NVT systems, where the second system is thermostatted at a higher temperature, T2 > T1.

The probability distribution of System 1:

P1(x1) ∝ exp(−U(x1)/k_B T1)

x1 represents the coordinates of all atoms in System 1.

The probability distribution of System 2:

P2(x2) ∝ exp(−U(x2)/k_B T2)

x2 represents the coordinates of all atoms in System 2.

The probability of System 1 having coordinates x1 and System 2 having coordinates x2:

P = P1(x1) P2(x2) ∝ exp(−U(x1)/k_B T1 − U(x2)/k_B T2)

The probability of System 1 having coordinates x2 and System 2 having coordinates x1:

P′ = P1(x2) P2(x1) ∝ exp(−U(x2)/k_B T1 − U(x1)/k_B T2)

The probability of accepting the attempted exchange of coordinates between Systems 1 and 2 becomes:

p_acc = min(1, P′/P) = min(1, e^Δ)

Letting Δ = (β1 − β2)(U(x1) − U(x2)),

where β1 = 1/(k_B T1) and β2 = 1/(k_B T2).
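This acceptance criterion is short to state in code. The sketch below assumes energies in kcal/mol (so kB ≈ 0.0019872 kcal mol⁻¹ K⁻¹) and an injectable random-number source; the function name and unit choice are illustrative:

```python
import math
import random

KB = 0.0019872041  # Boltzmann constant in kcal/(mol K) (assumed unit choice)

def exchange_accepted(U1, U2, T1, T2, rng=random.random):
    """Metropolis criterion for swapping the configurations of two replicas.

    Accepts with probability min(1, exp[(beta1 - beta2) * (U1 - U2)]),
    where beta = 1 / (kB * T).
    """
    beta1 = 1.0 / (KB * T1)
    beta2 = 1.0 / (KB * T2)
    delta = (beta1 - beta2) * (U1 - U2)
    return delta >= 0.0 or rng() < math.exp(delta)
```

When the colder replica already has the higher potential energy, Δ ≥ 0 and the swap is always accepted; otherwise it is accepted with the Boltzmann-weighted probability e^Δ.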

See Also[edit | edit source]


Coarse grain models

A coarse-grain (CG) model is a molecular simulation model in which groups of atoms are mapped onto a smaller number of chemical sites called beads. This model aims to reduce the complexity of the problem while keeping the interactions essential to the properties of interest. The reduction in the number of particles eliminates irrelevant vibrational modes and thus reduces the computational resources required for the simulation.

The all atom and MARTINI coarse grain representations of a POPC lipid bilayer.

The Mathematical Representation[edit | edit source]

Let A be an n-by-3 matrix in which each row holds the 3-D Cartesian coordinates of one atom. In most CG models, the CG map f is a linear transformation of A to a new m-by-3 matrix X (with m < n) by an m-by-n matrix M:

X = M A (1)

where M is the matrix of mapping weights.
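As a sketch, a centre-of-mass mapping (one common choice of the weight matrix M) can be written directly; the function name and the list-based layout are illustrative assumptions:

```python
def cg_map(coords, masses, bead_atoms):
    """Apply a centre-of-mass coarse-grain map, a special case of X = M A.

    coords     : list of n [x, y, z] atomic coordinates (the matrix A)
    masses     : list of n atomic masses
    bead_atoms : list of m index lists; bead_atoms[I] holds the atoms
                 mapped onto bead I (each atom belongs to one bead only)
    Returns the m x 3 list of bead coordinates (the matrix X).
    """
    beads = []
    for atoms in bead_atoms:
        mtot = sum(masses[i] for i in atoms)
        beads.append([sum(masses[i] * coords[i][d] for i in atoms) / mtot
                      for d in range(3)])
    return beads
```

Here row I of the implied weight matrix M contains m_i/Σm for the atoms of bead I and zeros elsewhere.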

In general, Noid et al.[2] suggest five conditions to ensure that a CG model is consistent with the all-atom (AA) model:

  1. The relationship between the AA coordinates and the CG coordinates is linear, i.e., it follows equation (1).
  2. For each CG bead, there is at least one AA atom that belongs to that CG bead only.
  3. The forces in the CG simulation satisfy the mean forces of the AA force field.[2]
  4. No atom is involved in the definition of more than one CG site.
  5. The mass of a CG bead is the weighted harmonic mean of the corresponding atomic masses,[2] M_I = (Σ_i c_Ii²/m_i)⁻¹, where the c_Ii are the mapping weights of the atoms belonging to bead I.

Common Coarse-Grain Model[edit | edit source]

There are two approaches to creating a CG model. The first, named "bottom-up", starts from the AA structure or from a united-atom (UA) structure (a structure with all hydrogens implicitly included in the heavy atoms). The CG model substitutes groups of atoms with predefined CG beads so that the new model recreates the structure and the interaction energies of the original model. In contrast, the "top-down" approach starts from the structures of macromolecules obtained from experimental data; one defines the mapping and the interactions between beads so as to stabilize these empirical structures.

Several well-known CG models appear in the literature, including:

  • Elastic network model: each amino acid is represented by one bead. The interactions between beads are simple, with the aim of recreating the native structure of the protein obtained from experiment.
  • Go-like model: similar to the elastic network model, the Go-like model uses one bead per amino acid. However, many non-bonded interactions are calculated and parameterized in this model, giving it the ability to describe protein folding.[3]
  • Direct Boltzmann inversion: starting from the bonded and non-bonded potential forms of molecular dynamics, this method optimizes the parameters to mimic the atomistic distributions. In other words, it transfers the atomistic pair potential of mean force of the AA model to the CG model.

The MARTINI Model[edit | edit source]

The MARTINI model is a CG model introduced by Marrink and coworkers[4]. The model aims to provide "a broader range of applications without the need to reparametrize the model each time". MARTINI is used to simulate lipid systems, peptide-membrane binding, and protein-protein recognition.

A bead in MARTINI usually contains four heavy atoms (hydrogens excluded). Unlike other CG models, MARTINI contains only a few CG types, based on the charge and polarizability of the beads rather than on their atomic composition. In the latest version, the model has four major interaction sites: P for polar, N for non-polar, C for apolar, and Q for charged. These types are expanded into 18 subtypes to describe interactions in more detail.

See also[edit | edit source]

Wikipedia:Force_field_(chemistry)#Coarse-grained_force_fields
Wikipedia:MARTINI

References[edit | edit source]

  1. Tuckerman, M. E. Monte Carlo.Statistical mechanics: Theory and molecular simulation; Oxford University Press Inc.: New York, 2010; pp. 300-304.
  2. a b c Noid, W.G. (2008). "The multiscale coarse-graining method. I. A rigorous bridge between atomistic and coarse-grained models". The Journal of Chemical Physics. 128 (24): 244144. doi:10.1063/1.2938860. {{cite journal}}: Unknown parameter |coauthors= ignored (|author= suggested) (help)
  3. Tozzini, Valentina (April 2005). "Coarse-grained models for proteins". Current Opinion in Structural Biology. 15 (2): 144–150. doi:10.1016/j.sbi.2005.02.005.
  4. Marrink, Siewert J. (1 January 2004). "Coarse Grained Model for Semiquantitative Lipid Simulations". The Journal of Physical Chemistry B. 108 (2): 750–760. doi:10.1021/jp036508g. {{cite journal}}: Unknown parameter |coauthors= ignored (|author= suggested) (help)


Polarizable force fields

Polarizability[edit | edit source]

When a molecule is placed in different environments, its charge distribution changes; this response defines polarizability. Force fields play a key role in molecular dynamics: the more accurately a force field describes physical reality, the more precise the output of the simulation will be. It is therefore crucial to have a good approximation for the magnitude of each partial charge and for its position at the nucleus. Non-polarizable (fixed-charge) models are effective potentials whose approximations cannot fully capture many-body effects; they offer tractable descriptions and are adequate for the equilibrium properties of homogeneous systems. In contrast, because polarizable empirical force fields include many-body effects, they provide a clear and systematic improvement in functional form.

Drude Model (also known as Charges on Springs)[edit | edit source]

The Drude polarizable force field approximates the effect of induced polarization by adding negatively charged "Drude" particles to the system. These particles are tethered to the non-hydrogen atoms of the system with a harmonic potential. Electrostatic interactions with other charged particles cause the Drude particles to move in response, mimicking the effect of induced electron polarization. In this model, a Lennard-Jones potential is used to describe dispersion interactions. Positive features of the Drude model are that it is relatively easy to implement within existing force fields and that it is chemically intuitive. Its main weakness is that the extra charged particles increase the number of interaction calculations. The atomic polarizability, α, of a given atom can be obtained from equation (1), where q_D is the charge on the Drude particle and k_D is the force constant of the spring that connects the Drude particle to its parent nucleus.

α = q_D²/k_D (1)
Schematic of the SWM4-NDP Drude polarizable water molecule. The Drude particle and its harmonic tether is shown in red. The lone-pair particle (LP) is on the H-O-H bisector.
A SWM4-NDP Drude polarizable model water molecule is polarized by the electric field created by a neighbouring potassium ion. The Drude particle (red) is negatively charged and moves to maximize its interaction with the potassium.

AMOEBA Model (Atomic Multipole Optimized Energetics for Biomolecular Applications)[edit | edit source]

AMOEBA is one of several force fields with the ability to describe electronic polarization. AMOEBA treats electrostatics using a combination of fixed and polarizable multipoles. In AMOEBA, polarization is modeled by interactive atomic induced dipoles: the induced dipole at each atom is the product of its atomic polarizability and the electrostatic field at that atom produced by the permanent multipoles and induced dipoles of all other atoms. As a consequence, the calculation of electrostatic interactions differs from fixed-charge force fields, which use a Coulomb potential to evaluate only charge-charge interactions; AMOEBA adds extra terms to the force field to account for the charge-dipole and dipole-dipole interactions. One advantage of the AMOEBA model is its emphasis on reproducing molecular polarizabilities and electrostatic potentials, instead of just interaction energies.

Difference between AMOEBA and fixed-charge force fields: the AMOEBA potential has static atomic charges, point dipole moments, and point quadrupole moments centered on each atom. A polarizable point dipole is also included on each atomic center, with the degree of polarizability defined by an atomic polarizability parameter. The equilibrium directions and magnitudes of these induced dipoles are calculated iteratively.

References[edit | edit source]

Baker, Christopher M. (2015). "Polarizable Force Fields for Molecular Dynamics Simulations of Biomolecules". Wiley Interdisciplinary Reviews: Computational Molecular Science. doi:10.1002/wcms.1215.

"Current Status of the AMOEBA Polarizable Force Field". The Journal of Physical Chemistry B. 114 (8): 2549–2564. 2010. doi:10.1021/jp910674d.


Radial Distribution Functions

The radial distribution function (rdf) defines the probability of finding a particle at distance r from another tagged particle. Here, the distance r is between the oxygen atoms of two water molecules.
The radial distribution functions of solid (T = 50 K), liquid (T = 80 K), and gaseous argon (T = 300 K). The radii are given in reduced units of the molecular diameter (σ).


The radial distribution function (RDF), denoted in equations by g(r), defines the probability of finding a particle at a distance r from another tagged particle. The RDF depends strongly on the state of matter, so it varies greatly between solids, liquids, and gases. The average density at any point in a liquid is referred to as the bulk density, ρ; this density is a fixed property of a given liquid at a given temperature and pressure. The density of the liquid at a distance r from another molecule is referred to as the local density, ρ(r), and depends on the structure of the liquid.

The radial distribution function can be evaluated as follows:

computing g(r)

g(r) = n(r) / (ρ · 4πr²Δr)

where n(r) is the number of particles found within a spherical shell of thickness Δr at distance r from the tagged particle, and 4πr²Δr is the spherical shell volume (the approximation holds for small shell thicknesses).

The correlation function, g(r), relates the bulk density to the local density. The local density can be calculated as follows:

Calculating local density

ρ(r) = ρ g(r)
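As an illustrative sketch, g(r) can be estimated from a single configuration of particles in a periodic cubic box by histogramming pair distances and normalizing by the ideal-gas shell count; the function and variable names are assumptions, and a production code would also average over many configurations:

```python
import math

def radial_distribution(positions, box, r_max, dr):
    """Estimate g(r) from one configuration of N particles in a cubic
    periodic box of side `box`, by histogramming pair distances and
    normalizing with the ideal-gas shell count rho * 4 pi r^2 dr."""
    n = len(positions)
    rho = n / box ** 3
    nbins = int(r_max / dr)
    counts = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for d in range(3):
                dx = positions[i][d] - positions[j][d]
                dx -= box * round(dx / box)      # minimum-image convention
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                counts[int(r / dr)] += 2         # each pair counted from both ends
    g = []
    for b in range(nbins):
        r_mid = (b + 0.5) * dr
        shell = 4.0 * math.pi * r_mid ** 2 * dr  # spherical shell volume
        g.append(counts[b] / (n * rho * shell))
    return g
```

For randomly placed (ideal-gas) particles this estimate fluctuates around g(r) = 1 at all distances, which is a convenient check of the normalization.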

Radial Distribution Function: Solids[edit | edit source]

The radial distribution function of solid argon at 50 K calculated used a molecular dynamics simulation. The argon-argon interactions are described using a Lennard-Jones potential.

Solids have regular, periodic structures, with molecules fluctuating near their lattice positions. The structure is specific over long range, so defects in solids are rare. Discrete peaks at separations of σ, √2σ, √3σ, etc., can be seen in the RDF of a solid. Each peak is broadened because the particles vibrate around their lattice sites. There is zero probability of finding a particle between these peaks, as the molecules are packed regularly to fill space most efficiently. Each peak represents a coordination shell of the solid, with the nearest neighbours found in the first coordination shell, the second-nearest neighbours in the second, and so on.

Radial Distribution Function: Gases[edit | edit source]

Gases do not have a regular structure, which heavily influences their RDF. The RDF of a real gas has only a single coordination sphere, which rapidly decays to the bulk density, g(r) = 1. The RDF of a real gas is fairly simple, with values of g(r) as follows:

  • g(r) = 0 when r < σ (the repulsive core prevents overlap)
  • g(r) > 1 near r ≈ σ, the region where London forces are strongest
  • g(r) = 1 when r ≫ σ

Radial Distribution Function: Liquids[edit | edit source]

A molecular dynamics simulation of liquid argon at 91.8 K and 1.8 atm under periodic boundary conditions, used to compute the radial distribution function of the liquid.

Liquids follow the hard-sphere model of repulsion, so the density is zero where atoms would overlap. Because their molecules move dynamically, liquids do not maintain a constant structure and lose all long-range order. The first coordination sphere of a liquid occurs at r ≈ σ. At large values of r, the positions of the molecules become independent of each other and the distribution returns to the bulk density (g(r) = 1). The first peak is the sharpest and indicates the first coordination sphere of the liquid. Subsequent peaks occur at intervals of roughly σ but are much smaller than the first: liquids are more loosely packed than solids and therefore do not show exact intervals. Although multiple coordination spheres can be present, the probability of finding particles just outside the first sphere is depleted due to repulsion by the particles of the first sphere.

Coordination Numbers[edit | edit source]

The coordination number indicates how many molecules are found within each coordination sphere. Integrating g(r) in spherical coordinates out to the first minimum of the RDF gives the coordination number of a molecule.

Calculating the coordination number

n = 4πρ ∫₀^(r_min) g(r) r² dr
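Given a tabulated g(r), this integral can be evaluated with the trapezoidal rule; the sketch below assumes a grid of r values and illustrative names:

```python
import math

def coordination_number(r, g, rho, r_min):
    """Trapezoidal evaluation of n = 4 pi rho * integral_0^{r_min} g(r) r^2 dr,
    the number of neighbours inside the first coordination sphere.

    r     : increasing list of radii
    g     : g(r) values on the same grid
    rho   : bulk number density
    r_min : position of the first minimum of the rdf
    """
    total = 0.0
    for i in range(len(r) - 1):
        if r[i + 1] > r_min:
            break
        f0 = g[i] * r[i] ** 2
        f1 = g[i + 1] * r[i + 1] ** 2
        total += 0.5 * (f0 + f1) * (r[i + 1] - r[i])
    return 4.0 * math.pi * rho * total
```

Setting g(r) = 1 everywhere recovers the ideal-gas result (4/3)πρr_min³, the average number of particles in a sphere of radius r_min, which is a quick consistency check.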

A coordination number of 12 reflects the optimal packing of hard spheres. Because attractive forces are weak and isotropic while repulsive forces are very strong and short-ranged, many molecules can be approximated as "hard spheres". Simple liquids, liquids that do not experience hydrogen bonding or strong electrostatic forces, therefore pack in the most efficient way while avoiding repulsive interactions. The most efficient way for a fluid of spherical particles to fill a volume is for every molecule to have 12 neighbours. This explains why many simple liquids, such as argon, have a coordination number of 12.

Liquids with hydrogen bonding and electrostatic interactions, such as water, have much lower coordination numbers (water has a coordination number of 4–5 in the first sphere). These more complex liquids maximize their hydrogen-bonding interactions in the first sphere, which results in more energetically favourable but less efficient packing. The RDF of such liquids has a notably sharper first-coordination-sphere peak, which occurs at a shorter distance than in a simple liquid.

A Molecular Dynamics Simulation of Liquid Water at 298 K
The radial distribution functions of argon and water, calculated from molecular simulation. The argon rdf shows regular and definite coordination spheres. The water rdf is dominated by the first coordination sphere. Secondary coordination spheres are much less ordered than in simple liquids.
The first coordination sphere of water indicated on the O-O rdf.
The "kissing number" problem in 3 dimensions is the number of non-overlapping spheres that can be in contact with a central sphere. This simple model correctly predicts the coordination number of a molecule of a simple fluid should be 12.

See Also[edit | edit source]

Wikipedia:Radial distribution function
SklogWiki: Radial distribution function

References[edit | edit source]

Chandler, David. Introduction to Modern Statistical Mechanics. Oxford University Press. ISBN 9780195042771. {{cite book}}: Cite has empty unknown parameter: |1= (help)


Dielectric constants

The dielectric constant of a liquid can be calculated from the magnitude of the variance in the net dipole moment of a cubic simulation cell, M,

ε = ε∞ + (⟨M²⟩ − ⟨M⟩²) / (3 ε₀ V k_B T)

where V is the volume of the simulation cell and ε∞ is the infinite frequency dielectric constant. For non-polarizable models, this term is equal to 1.
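The fluctuation formula can be applied directly to a saved time series of the cell dipole moment. The sketch below assumes reduced units with ε₀ = 1 and illustrative names:

```python
def dielectric_constant(M_samples, volume, kT, eps0=1.0, eps_inf=1.0):
    """Fluctuation formula eps = eps_inf + (<M^2> - <M>^2) / (3 eps0 V kT),
    applied to a list of net-dipole vectors [Mx, My, Mz] saved along a
    trajectory (reduced units assumed, eps0 = 1)."""
    n = len(M_samples)
    mean = [sum(m[d] for m in M_samples) / n for d in range(3)]
    mean_sq = sum(m[0] ** 2 + m[1] ** 2 + m[2] ** 2 for m in M_samples) / n
    variance = mean_sq - (mean[0] ** 2 + mean[1] ** 2 + mean[2] ** 2)
    return eps_inf + variance / (3.0 * eps0 * volume * kT)
```

In practice the dipole variance converges slowly, so long trajectories are needed for a reliable estimate.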


Diffusion Coefficients

Einstein Equation[edit | edit source]

The mean square displacement as a function of time of an oxygen molecule diffusing in water and pentane. The plots were calculated from a molecular dynamics simulation of a periodic simulation cell containing the oxygen molecule and the solvent. These plots can be used to calculate the diffusion coefficient of the solute using the Einstein relation.
A molecular dynamics simulation of liquid argon in a periodic simulation cell where the particle positions are not "wrapped" to be placed in the opposing face of the cell when they cross the boundary. This is used to calculate diffusion coefficients using the Einstein equation.

Green-Kubo Theory[edit | edit source]

The diffusion coefficient of a liquid can be calculated from the velocity autocorrelation function of an MD simulation in the canonical ensemble,

D = (1/3) ∫₀^∞ ⟨v(0)·v(t)⟩ dt

where v(t) is the velocity of the particle at time t. Note that care must be taken when using an MD simulation with a stochastic thermostat, which introduces an external source of friction and random forces that perturb the velocities.
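The Green–Kubo estimate can be sketched from a stored velocity time series, averaging the autocorrelation over all time origins; the names and the simple trapezoidal integration are illustrative choices:

```python
def diffusion_coefficient(velocities, dt):
    """Green-Kubo estimate D = (1/3) * integral_0^inf <v(0).v(t)> dt from a
    time series of 3-D velocity vectors, averaging over all time origins."""
    n = len(velocities)
    vacf = []
    for lag in range(n):
        c = 0.0
        for t0 in range(n - lag):
            c += sum(velocities[t0][d] * velocities[t0 + lag][d] for d in range(3))
        vacf.append(c / (n - lag))
    # trapezoidal integration of the velocity autocorrelation function
    integral = sum(0.5 * (vacf[i] + vacf[i + 1]) * dt for i in range(n - 1))
    return integral / 3.0
```

In practice the upper integration limit is truncated where the autocorrelation function has decayed to noise, and the result is averaged over all particles.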

See Also[edit | edit source]

Wikipedia:Einstein relation (kinetic theory)
Wikipedia:Diffusion


Interfacial properties

Surface Tension[edit | edit source]

The surface tension (γ) of a liquid is defined as γ = (∂G/∂A)_{T,P}, where G is the Gibbs energy and A is the surface area of the liquid. At equilibrium, the surface tension of a liquid is positive, indicating that the Gibbs energy of the system would increase if the surface area were increased. The SI unit for surface tension is N/m, but dyn/cm is sometimes reported instead.

Physically, this reflects that molecules at the liquid-vapor interface have fewer neighbors than molecules in the bulk liquid. This generally means that the intermolecular interactions the molecule engages in are weaker. Increasing the surface area would result in more molecules being at the interface, so this is thermodynamically disfavored. The surface tension is an indicator of the degree to which it is disfavored. This effect is particularly strong in associated liquids like water, which becomes more ordered and has fewer hydrogen bonds at the water-vapor interface.

The surface tension of a liquid can be calculated in a molecular simulation from the averages of the components of the pressure tensor in an NVT simulation containing a slab of the liquid surrounded by vapor layers along the z axis,

γ = (L_z/2) [⟨P_zz⟩ − (⟨P_xx⟩ + ⟨P_yy⟩)/2]

where L_z is the length of the cell along z and the factor 1/2 accounts for the two liquid–vapor interfaces.

Surfactants[edit | edit source]

See Also[edit | edit source]

Wikipedia:Surface_tension


Transport properties

Viscosity[edit | edit source]

The viscosity of a liquid is defined by the drag force experienced by a moving plate separated from a stationary plate by a layer of the liquid,

F = η A u / h

where u is the speed of the moving plate, h is the distance between the two plates, and A is the surface area of the moving plate.

The viscosity η can be calculated from a molecular dynamics simulation using the Green–Kubo relation,

η = (V / k_B T) ∫₀^∞ ⟨P_xy(0) P_xy(t)⟩ dt

where P_xy is the xy component of the pressure tensor. The xz or yz off-diagonal components can be used instead.
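The same autocorrelation-and-integrate recipe used for the diffusion coefficient applies to the pressure-tensor time series; a minimal sketch with illustrative names:

```python
def shear_viscosity(p_xy, dt, volume, kT):
    """Green-Kubo estimate eta = (V / kT) * integral of the autocorrelation
    function of the off-diagonal pressure-tensor component P_xy.

    p_xy : time series of P_xy sampled every dt
    """
    n = len(p_xy)
    acf = []
    for lag in range(n):
        c = sum(p_xy[t] * p_xy[t + lag] for t in range(n - lag))
        acf.append(c / (n - lag))
    # trapezoidal integration of the autocorrelation function
    integral = sum(0.5 * (acf[i] + acf[i + 1]) * dt for i in range(n - 1))
    return volume * integral / kT
```

Averaging the result over the independent off-diagonal components (xy, xz, yz) improves the statistics.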

Thermal Conductivity[edit | edit source]

The thermal conductivity of a liquid is defined by the rate of heat transfer between an upper and lower plate at different temperatures (T1 and T2).

The Green–Kubo relation for thermal conductivity relates the thermal conductivity λ to the time autocorrelation function of the microscopic heat current,

See Also[edit | edit source]

Wikipedia:Viscosity Wikipedia:Green–Kubo_relations


Solids

Figure 1: A molecular dynamics simulation of solid argon at 50 K. Argon-argon interactions are described using a Lennard-Jones potential. The atoms are arranged in a face-centered cubic cell.
Figure 2: The radial distribution functions of solid (T = 50 K), liquid (T = 80 K), and gaseous argon (T = 300 K). The radii are given in reduced units of the molecular diameter (σ).

The Structure and Dynamics of Simple Solids[edit | edit source]

Solids have regular periodic structures in which the atoms are held at fixed lattice points in a rigid 3-D framework. This regular repeating structure can take different forms and is usually represented by a unit cell; the entire solid structure consists of repeated translations of this cell. Since the atoms cannot move freely, solids have a fixed volume and shape, which is only distorted by an applied force. However, atoms in solids do still have motion: they vibrate around their lattice positions, so their positions fluctuate slightly around the lattice points. This can be thought of as each atom being tethered to its point while still able to vibrate around it. The intermolecular interactions between the atoms keep them in their fixed positions. These forces depend on the composition of the solid and can include London dispersion, dipole-dipole, quadrupole-quadrupole, and hydrogen-bonding interactions.[1] The temperatures at which these solids occur are low enough that the atoms do not have enough energy to overcome these forces and move away from their fixed positions. This keeps the solid tightly packed, eliminates most of the space between atoms or molecules, and gives solids their high density. Figure 1 shows a molecular dynamics simulation of solid argon at 50 K, where the atoms all vibrate around their fixed lattice points. Argon atoms interact only through London dispersion forces, which are described by the Lennard-Jones potential. The Lennard-Jones equation below accounts for the London dispersion attraction and the Pauli repulsion between atoms.

U(r) = 4ε[(σ/r)¹² − (σ/r)⁶]

The intermolecular distance between the argon atoms is equal to 2^(1/6)σ, i.e., the atoms sit at the minimum of this potential energy function. This maximizes the intermolecular attraction by giving the most negative potential energy. The atoms in solid argon are held together by these strong forces of attraction and are tightly packed to minimize empty space.
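The location and depth of the Lennard-Jones minimum can be checked numerically with a few lines (reduced units, ε = σ = 1; the function name is an illustrative choice):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)   # minimum of the potential, about 1.12 sigma
```

Evaluating `lennard_jones(r_min)` gives −ε, the depth of the potential well, while U(σ) = 0 marks the effective molecular diameter.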

Differences in the Radial Distribution Function[edit | edit source]

The radial distribution function relates the bulk density of a solid, liquid, or gas to the local density at a distance r from a chosen molecule or atom. The equation that relates these parameters is g(r) = ρ(r)/ρ.[2]

The radial distribution functions of solid, liquid, and gaseous argon can be seen in Figure 2. In a solid, particles are found at defined positions, which is shown by the discrete peaks at separations of σ, √2σ, √3σ, 2σ, etc. The peaks of this radial distribution function are also broadened, because the molecules fluctuate around their lattice positions and occupy slightly different positions in this range. The regions of the function with g(r) = 0 are regions where there is zero probability of finding another molecule or atom. The probability is zero between the peaks of a solid radial distribution function because of the regular structure, in which all of the molecules are packed tightly to fill the space most efficiently; this leaves regular intervals where no atoms or molecules are present. Each peak in a radial distribution function represents a coordination sphere in which there is a high probability of finding molecules.[3] Each subsequent peak represents a coordination sphere that is farther from the origin molecule, so the nearest neighbours lie in the first coordination sphere. It is also important to note that g(r) = 0 when r < σ: at these separations the electron density clouds of the two atoms overlap, making the potential energy prohibitively high.

In contrast, the radial distribution function of a gas has only one peak/coordination sphere, which then decays to the bulk density, represented by g(r) = 1. This simple radial distribution function is a consequence of a density so low that only the interactions of individual pairs of gas molecules affect the radial distribution function. The density is higher around the origin molecule due to strong London dispersion forces in this region, but these forces decay quickly. The radial distribution function of a liquid also differs from that of a solid. Molecules in a liquid can move around, but their positions are still correlated through the intermolecular forces between them, so liquids have periodic peaks in the radial distribution function, as shown in Figure 2. Liquids thus also have coordination spheres: it is more likely to find molecules at these distances from an origin molecule, giving a greater local density at these positions. However, the density is still lower than in a solid due to the fluidity of liquids and the ability of the molecules to change positions. The radial distribution function of a liquid has its peaks at intervals of roughly σ, reflecting the looser packing of the molecules in a liquid compared to a solid.[2] This looser packing arises because the coordination spheres are not bound to fixed positions. There is also a lower probability of finding molecules in the second coordination sphere because of repulsive interactions with the first sphere. Due to the disordered nature of liquids, the radial distribution function eventually decays to one and returns to the bulk density at large values of r, where the positions are no longer correlated.

Simple liquids, such as liquid argon, pack efficiently to avoid repulsive interactions between the atoms, but some space remains between them. Solids are packed very tightly so that the empty space between particles is minimized and most crevices are filled; their fixed positions allow them to maintain this tight packing. Liquids also minimize this space, but are less tightly packed than solids because their molecules can move around and change positions: they have more energy to overcome the intermolecular forces correlating their positions. This difference in packing is seen in the radial distribution functions, where the solid peaks occur at closer intervals than the liquid peaks.

References[edit | edit source]

  1. 2017. Basic Chemistry/States of Matter. https://en.wikiversity.org/wiki/Basic_Chemistry/States_of_matter (accessed November 10, 2017)
  2. a b Rowley, C. Chemistry 4305 Radial Distribution Functions. Memorial University of Newfoundland, St. John's, NL. Accessed 10 November 2017.
  3. Liquids and Solutions: 2nd Year Michaelmas Term: Introduction. http://rkt.chem.ox.ac.uk/lectures/liqsolns/liquids.html (accessed November 10, 2017)


Definition of variables

Variable	Quantity
V	potential energy
K	kinetic energy
H	enthalpy
U	internal energy
G	Gibbs energy
A	Helmholtz energy
μ	dipole moment
Θ	quadrupole moment
q	charge


Molecular Dynamics Simulations Using NAMD

A NAMD simulation is initiated by running the program namd2 with the input file as its argument:

namd2 input.conf > output.log