Molecular Simulation/Printable version
This is the print version of Molecular Simulation.
The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Molecular_Simulation
Charge-Charge Interactions
All intramolecular and intermolecular interactions are the result of electrostatic interactions between charged particles. All molecules are composed of three subatomic particles: protons, neutrons, and electrons. Neutrons carry no charge, while protons and electrons carry charges of equal magnitude but opposite sign. By convention, protons are defined as having positive charges and electrons as having negative charges. The magnitude of these charges is a fixed constant known as the elementary charge, e = 1.602176565(35) × 10^{-19} C. A second constant that appears throughout electrostatics is the vacuum permittivity, ε_{0} = 8.854187817... × 10^{-12} F/m (farads per metre).
The force between two charged particles due to this electrostatic interaction is,

Coulomb's law (force)
F = \frac{q_A q_B}{4\pi\varepsilon_0 r^2}

In this equation, r is the distance between particles A and B. The charge of a particle is given by the variable q; a charge is a scalar quantity with a sign and a magnitude.
It is often more convenient to discuss intermolecular forces in terms of the potential energy of the interaction. The potential energy of the interaction of two charged particles, labelled A and B, can be determined by integrating the force experienced between the particles as they are moved from infinite separation, where the intermolecular interaction is zero, to the distance (r) at which they are actually separated,

Coulomb's law (potential energy)
V(r) = -\int_{\infty}^{r} F\, dr' = \frac{q_A q_B}{4\pi\varepsilon_0 r}
Ionic molecules have charge-charge Coulombic interactions. If the charges have the same sign (e.g., two positive ions), the interaction is repulsive. If the charges have the opposite sign, the interaction is attractive.
Example
What is the potential energy of the electrostatic interaction between an Na^{+} ion and a Cl^{-} ion separated by 5.00 Å?

Solution: These are both ions at an intermediate separation, so their electrostatic interaction can be approximated using Coulomb's law. The charge-charge distance is given (5.00 Å). The charges q_{A} and q_{B} are +e for the Na^{+} cation and -e for the Cl^{-} anion, respectively. The distance and charges must be converted to the SI units of m and C before evaluating.
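The arithmetic in this example can be checked with a short script; this is a minimal sketch using CODATA values for the constants, with a conversion to kJ/mol added because molar units are more common in chemistry.

```python
import math

# Physical constants (CODATA values)
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
AVOGADRO = 6.02214076e23     # Avogadro constant, 1/mol

def coulomb_energy(q_a, q_b, r):
    """Coulomb potential energy (J) of two point charges q_a, q_b (C) at separation r (m)."""
    return q_a * q_b / (4.0 * math.pi * EPS0 * r)

# Na+ (+e) and Cl- (-e) separated by 5.00 Angstrom = 5.00e-10 m
r = 5.00e-10
v_joules = coulomb_energy(+E_CHARGE, -E_CHARGE, r)
v_kj_mol = v_joules * AVOGADRO / 1000.0

print(f"V = {v_joules:.3e} J per ion pair = {v_kj_mol:.1f} kJ/mol")
```

The result, roughly -4.6 × 10^{-19} J per ion pair (about -278 kJ/mol), is attractive, as expected for opposite charges.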
Charge-Dipole Interactions
Charge-dipole interactions occur between an atom with a net formal charge, such as Na^{+} (q_{ion} = +1) or Cl^{-} (q_{ion} = -1), and a molecule with a dipole. A dipole moment is a vector (μ) that points between two charges of opposite sign (e.g., the partial charges at the two ends of a polar bond) separated by a distance l.
The dipole moment of a molecule depends on a few factors.
- Differences in electronegativity of the atoms in the molecule. The larger the difference, the greater the dipole moment.
- The distance between the positive and negative ends of the dipole. A larger distance corresponds to a larger dipole moment.
- Multiple polar bonds pointing in the same direction increase the molecular dipole moment, whereas bonds pointing in opposite directions cancel each other out.
For diatomic molecules, the dipole moment can be calculated with the formula μ = qr, where μ is the dipole moment, q is the magnitude of the atomic partial charges, and r is the distance between them.
From the potential energy equation for a charge-dipole interaction,

V(r,\theta) = -\frac{q_{ion}\,\mu\cos\theta}{4\pi\varepsilon_0 r^2}

you can see that there are four factors that affect the potential energy:
- the distance from the charge (q_{ion}) to the dipole (r)
- the magnitude of the dipole moment (μ)
- the magnitude of the charge (q_{ion})
- the angle of the interaction (θ)
Depending on the angle, the interaction can be repulsive, when the charge is close to the like-charged end of the dipole (i.e., + +); attractive, when the charge is close to the oppositely charged end of the dipole (i.e., + -); or zero, when the positive and negative ends of the dipole are equidistant from the charge (e.g., a perpendicular orientation).
Derivation of Charge-Dipole Interactions Energy
Express the interaction as the sum of the Coulombic interactions between the charge and the two poles of the dipole.

Distance from the charge to the poles: r - (l/2)cosθ to the near pole and r + (l/2)cosθ to the far pole.

This approximation is valid if the distance between the charge and the dipole (r) is much greater than the length of the dipole (l), making terms of order (l/r)^{2} negligible.

Substituting these lengths into the potential energy equation and expanding in powers of l/r yields the charge-dipole interaction energy given above.
Example: Charge-Dipole Interaction
What is the interaction energy of an ammonia molecule (μ = 1.42 D) with a Cl^{-} ion that is 2.15 Å away, when the dipole moment of ammonia is aligned with the Cl^{-} ion (θ = 0)?
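A numerical sketch of this example follows, using the charge-dipole energy expression above with θ = 0 taken as the attractive alignment; the constants and the unit conversion (1 D = 3.33564 × 10^{-30} C·m) are standard values.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
DEBYE = 3.33564e-30          # 1 debye in C*m
AVOGADRO = 6.02214076e23     # 1/mol

def charge_dipole_energy(q, mu, r, theta):
    """V = -q*mu*cos(theta) / (4*pi*eps0*r^2), with theta = 0 the attractive alignment.
    q in C (magnitude), mu in C*m, r in m; returns J."""
    return -q * mu * math.cos(theta) / (4.0 * math.pi * EPS0 * r**2)

mu = 1.42 * DEBYE   # dipole moment of ammonia
q = E_CHARGE        # magnitude of the Cl- charge
r = 2.15e-10        # 2.15 Angstrom in metres

v = charge_dipole_energy(q, mu, r, 0.0)
print(f"V = {v:.3e} J = {v * AVOGADRO / 1000.0:.1f} kJ/mol")
```

The interaction energy comes out to roughly -1.5 × 10^{-19} J (about -89 kJ/mol).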
Dipole-Dipole Interactions
Dipole-dipole interactions occur between two molecules when each molecule has a permanent dipole moment. These interactions can be attractive or repulsive depending on the relative orientation of the two dipoles.
A permanent dipole moment occurs when a molecule exhibits uneven charge distribution. This is caused by atoms of different electronegativities being bonded together. The electron density of the molecule will be polarized towards the more electronegative atoms. This excess of negative charge in one part of the molecule and the deficit of charge in another area of the molecule results in the molecule having a permanent dipole moment.
The orientation of the two interacting dipoles can be described by three angles (θ_{1}, θ_{2}, Φ). The angle Φ is the dihedral angle between the two dipoles, corresponding to their relative rotation; it is this angle that accounts for the greatest change in the potential energy of the electrostatic interaction. Attractive dipole-dipole interactions occur when the oppositely charged ends of the two dipole moments are in closer proximity than the like-charged ends. When the opposite and like-charged ends of the two dipoles are equidistant, there is a net-zero electrostatic interaction. This net-zero interaction occurs when the dihedral angle Φ is 90°, i.e., the two dipoles are perpendicular to each other.
Derivation
The potential energy of a dipole-dipole interaction can be derived by applying a variation of Coulomb's law for point charges. By treating the two dipoles as a series of point charges, the interaction can be described by a sum of all charge-charge interactions, where the distance between charges is described using vector geometry.
Similarly to the derivation of the charge-dipole interaction, the derivation for dipole-dipole interactions begins by defining the distance between the centres of the two dipoles as r and the length of each dipole as l. Using trigonometry, as in the charge-dipole case, the length of each of the four charge-charge vectors (+ +, + -, - +, and - -) can be determined in terms of r, l, and the orientation angles.
This equation can be simplified so that the potential energy can be determined as a function of four variables:

V(r,\theta_1,\theta_2,\phi) = -\frac{\mu_1\,\mu_2}{4\pi\varepsilon_0 r^3}\left(2\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2\cos\phi\right)

where μ_{1} and μ_{2} are the magnitudes of the two dipole moments, r is the distance between the centres of the two dipoles, and θ_{1}, θ_{2}, and Φ describe the orientation of the two dipoles. This potential energy decays as r^{-3}.

The ideal attractive potential occurs when the two dipoles are aligned such that Φ = θ_{1} = θ_{2} = 0. The interaction potential energy can also be rotationally averaged to provide the interaction energy thermally averaged over all dipole orientations (the Keesom interaction), which decays as r^{-6}.
Quadrupole-Quadrupole Interactions
The Quadrupolar Phenomenon
A molecular quadrupole moment arises from an uneven distribution of charge inside a molecule. Unlike a molecular dipole moment, a quadrupole moment cannot be described using two point charges separated by a distance.

Quadrupoles are the second-order term in the multipole expansion of the electric field of a molecule. The general form of a molecular quadrupole is a rank-two tensor (i.e., a matrix). For linear quadrupoles, the quadrupolar interaction can be described using a single scalar moment, Θ.

In chemistry, some special cases arise when looking at linear molecules such as carbon dioxide (CO_{2}) and carbon disulfide (CS_{2}). Each of these molecules has a quadrupole moment, but they are of opposite sign. These phenomena can be explained by invoking Pauling electronegativity arguments. Oxygen has a greater electronegativity than carbon, so there is more electron density around the terminal oxygen atoms than in the centre. The opposite is true for carbon disulfide: the sp hybridized carbon is more electronegative than the terminal sulfur atoms, and so electron density is localized around the centre of the molecule. The physical manifestation of the two different electronic distributions is that carbon dioxide has a negative quadrupole moment (approximately -1.4×10^{-39} C·m^{2}), while that of carbon disulfide is positive, approximately +1.2×10^{-39} C·m^{2}.
Calculating Quadrupole-Quadrupole Interactions
The interaction energy of two linear quadrupoles is given by,

Linear Quadrupole-Quadrupole Interaction Energy
V(r) = \frac{\Theta_1\,\Theta_2}{4\pi\varepsilon_0 r^5}\, f(\theta_1,\theta_2,\phi)

where Θ_{1} and Θ_{2} are the two quadrupole moments and f(θ_{1}, θ_{2}, Φ) collects the orientational factors. The quadrupole-quadrupole interaction energy is proportional to the reciprocal of the distance between the two particles (r) to the power of 5. Due to this higher exponent, quadrupole-quadrupole interactions weaken with increasing intermolecular distance at a faster rate than charge-charge or charge-dipole interactions. The quadrupolar interaction energy also depends on the orientation of the two molecules with respect to each other, which is expressed in the orientational term f(θ_{1}, θ_{2}, Φ).
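The different decay rates can be made concrete with a small sketch comparing how much of each interaction survives when the separation doubles; the exponents are those of the power laws quoted in this book.

```python
# Power-law decay exponents of the electrostatic interactions
# discussed in this book: V ~ 1/r^n.
exponents = {
    "charge-charge": 1,
    "charge-dipole": 2,
    "dipole-dipole": 3,
    "quadrupole-quadrupole": 5,
}

# Fraction of the interaction energy that remains when r is doubled
remaining = {name: 0.5 ** n for name, n in exponents.items()}

for name, frac in remaining.items():
    print(f"{name}: {frac * 100:.2f}% of |V| remains when r doubles")
```

Doubling r leaves half of a charge-charge interaction but only about 3% of a quadrupole-quadrupole interaction, which is why quadrupolar effects are felt only at short range.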
Applications of Quadrupole-Quadrupole Interactions to Molecular Organization
Quadrupole-quadrupole interactions fall under the category of electrostatic phenomena. That is, they are governed by Coulomb's law just like charge-charge and dipole-dipole interactions, so the conventions of 'like repels like' and 'opposites attract' apply to quadrupole-quadrupole interactions as well. See Figure 1 for a visualization.
Benzene (C_{6}H_{6}) has a quadrupole moment resulting from its π bonding system. The most stable geometries of a complex composed of two benzene molecules are the 'T-shaped' and 'parallel displaced' arrangements, where the attractive quadrupole-quadrupole interaction is strongest.
Hexafluorobenzene (C_{6}F_{6}) is another aromatic molecule that possesses a quadrupole moment, although it is of opposite sign to that of benzene. A partial negative charge on the periphery of the ring is created by the strongly electron-withdrawing fluorine atoms, which leave the centre of the ring electron deficient. The favoured alignments of two hexafluorobenzene molecules are still 'T-shaped' and 'parallel displaced'. Because of their complementary electronic distributions, benzene and hexafluorobenzene can stack in a parallel geometry.
Appendix
Complete Equation for Charge-Linear Quadrupole Interaction Energy
Complete Equation for Dipole-Linear Quadrupole Interaction Energy
Complete Equation for Linear Quadrupole–Quadrupole Interaction Energy
Induced Polarization
When a molecule is exposed to an electric field ɛ, electron density is polarized away from the nuclei. A dipole is thus created due to the asymmetric distribution of electron density. The induced molecular dipole created is proportional to the strength of the electric field (ɛ) as well as the polarizability of the molecule, α.
Polarizability allows us to better understand the interactions that occur between non-polar atoms or molecules and electrically charged species. In general, polarizability reflects how tightly the nucleus holds onto the electron density: loosely bound electron clouds distort more easily.
Units of Polarizability
Dimensional analysis of the induced polarization, μ_{ind} = αɛ, shows that α has SI units of C·m^{2}·V^{-1}. For convenience, we define the polarizability volume, α' = α/(4πε_{0}), which has units of volume and roughly corresponds to the molecular volume. A large polarizability corresponds to a "soft" cloud of electron density, which is easily distorted by an electric field.
Polarizability in Intermolecular Interactions
When we consider the effect that a charge (Q) has on a neutral atom at distance r, an induced dipole is created:

\mu_{ind} = \alpha\varepsilon

The electric field at a distance r from the point charge Q is,

\varepsilon = \frac{Q}{4\pi\varepsilon_0 r^2}

The energy of a polarizable atom in an electric field is V = -\tfrac{1}{2}\alpha\varepsilon^{2}. Substituting the field of the point charge gives the charge-induced dipole interaction potential,

V(r) = -\frac{\alpha Q^2}{2\left(4\pi\varepsilon_0\right)^2 r^4} = -\frac{\alpha' Q^2}{2\left(4\pi\varepsilon_0\right) r^4}

Differentiating with respect to r gives the corresponding force,

F(r) = -\frac{dV}{dr} = -\frac{2\alpha' Q^2}{\left(4\pi\varepsilon_0\right) r^5}

which is attractive for either sign of Q, since the energy depends on Q^{2}.
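As a numerical sketch of this result, the following computes the charge-induced dipole energy for a singly charged ion near an argon-like atom; the polarizability volume of argon (about 1.64 Å³) is an approximate literature-style value used only for illustration.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
AVOGADRO = 6.02214076e23     # 1/mol

def charge_induced_dipole_energy(q, alpha_prime, r):
    """V = -alpha' * q^2 / (2 * (4*pi*eps0) * r^4).
    q in C, alpha_prime (polarizability volume) in m^3, r in m; returns J."""
    return -alpha_prime * q**2 / (2.0 * 4.0 * math.pi * EPS0 * r**4)

alpha_prime = 1.64e-30   # ~1.64 A^3 for argon, converted to m^3 (illustrative)
r = 4.0e-10              # 4.0 Angstrom

v = charge_induced_dipole_energy(E_CHARGE, alpha_prime, r)
print(f"V = {v:.3e} J = {v * AVOGADRO / 1000.0:.2f} kJ/mol")
```

Because the energy depends on q², the interaction is attractive regardless of the sign of the ion's charge.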
Repulsive Interactions
Pauli Exclusion Principle
For a system of two indistinguishable fermions (particles of half-integer spin: protons, electrons, etc.), the total wavefunction is an antisymmetrized combination of independent single-particle states ψ_{a} and ψ_{b}:

\psi(x_1, x_2) = \frac{1}{\sqrt{2}}\left[\psi_a(x_1)\psi_b(x_2) - \psi_b(x_1)\psi_a(x_2)\right] ^{[1]}

where x_{1} and x_{2} are spatial variables. Notice that the wavefunction vanishes if ψ_{a} = ψ_{b}. In the case of an electron, the wavefunction also contains a spinor, χ, which distinguishes between spin-up and spin-down states. For simplicity, assume that both electrons have the same spin. This results in the Pauli Exclusion Principle:
Two electrons of the same spin cannot exist in the same state.
Since there are only two allowable spins (+1/2,-1/2), this guarantees that each orbital contains a maximum of two electrons of opposing spin.
Exchange Force
The apparent force that is observed when electron orbitals overlap originates from the requirement that the wavefunction be antisymmetric under exchange,

\psi(x_1, x_2) = -\psi(x_2, x_1)

This is commonly known as the exchange force. When the spin state is symmetric (e.g., ↑↑ or ↓↓), the spatial wavefunction must be antisymmetric (antibonding), resulting in a repulsive interaction. This is also the case for filled electron orbitals; helium, with electron configuration 1s^{2}, does not bond with itself because the interacting electrons would have to reside in the destabilized antibonding orbital (as can be verified with the MO diagram of He–He).
The Pauli repulsion energy^{[2]} is of the form

V_{rep}(r) = A\, e^{-Br}

where A and B are constants.
Notice that this parallels the exponential decay of electron density with distance from a nucleus (Figure: Electron Density Repulsion). Intuitively, the phenomenon of Pauli repulsion should only exist where electron clouds overlap.
It is of practical advantage to replace the exponential function with a simpler polynomial expression, as in

V_{rep}(r) = \frac{C_{12}}{r^{12}}

where C_{12} is a constant.
A comparison of the polynomial and exponential forms of the potential is illustrated on the right (Figure: Comparison of Pauli Repulsion Formulae). While the two functions appear very different at small r, the difference is immaterial, considering that the probability of sampling such a high-energy region is very low.
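The comparison in the figure can be reproduced qualitatively with a sketch like the following; the constants A, c, and C₁₂ are arbitrary illustrative values, chosen only so the two walls are comparable in the region of interest.

```python
import math

# Illustrative constants (not fitted to any real system)
A, c = 1.0e5, 3.5        # exponential wall: A * exp(-c * r)
C12 = 8.0e5              # polynomial wall: C12 / r^12

def v_exponential(r):
    return A * math.exp(-c * r)

def v_polynomial(r):
    return C12 / r**12

# Both forms rise steeply at short range and are negligible at long range;
# they differ most at very small r, a region rarely sampled in a liquid.
for r in (2.0, 3.0, 4.0, 5.0):
    print(f"r = {r}: exp = {v_exponential(r):.3e}, r^-12 = {v_polynomial(r):.3e}")
```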
Comparison to Attractive Forces
Electrostatic Potential
V(r) \propto \frac{1}{r^{\,n+m+1}}

The potential for electrostatic interactions, such as charge-charge, charge-dipole, dipole-dipole, and quadrupole-quadrupole, always takes on some variation of the form above, where n and m are integers signifying the orders of the two poles in the multipole expansion^{[3]} (see Table: Range of Electrostatic Interactions).
Table: Range of Electrostatic Interactions

| | Monopole (n=0) | Dipole (n=1) | Quadrupole (n=2) |
|---|---|---|---|
| Monopole (m=0) | V ∝ r^{-1} | V ∝ r^{-2} | V ∝ r^{-3} |
| Dipole (m=1) | V ∝ r^{-2} | V ∝ r^{-3} | V ∝ r^{-4} |
| Quadrupole (m=2) | V ∝ r^{-3} | V ∝ r^{-4} | V ∝ r^{-5} |
For the sake of completeness, also consider the charge-induced dipole interaction,

V(r) = -\frac{\alpha' Q^2}{2\left(4\pi\varepsilon_0\right) r^4}

and the dipole-induced dipole and dispersion interactions,

V(r) = -\frac{\mu^2\alpha'}{\left(4\pi\varepsilon_0\right) r^6}, \qquad V(r) = -\frac{3}{4}\frac{I\,\alpha'^2}{r^6}

where α' is the polarizability volume of the molecule, I is the ionization potential, and Q is the charge.
For electrostatic interactions following this general form, the potential energy of the interaction between oppositely charged particles approaches negative infinity as the distance between them approaches zero. This would suggest that, if electrostatic interactions were the only forces in our model, oppositely charged ions would collapse onto each other as r approaches zero. Clearly this is an inaccurate representation of close-range interactions, and some repulsive term must also be considered.

Pauli repulsion is a comparatively short-range interaction, with V ∝ r^{-12}. The slope of this curve is very large at small internuclear separations; that is, the repulsive force rapidly diverges as the distance between nuclei approaches zero. For this reason the Pauli repulsive term is said to be much "harder" than the other electrostatic terms.
Van der Waals Model of Fluid (Hard Sphere)
In liquids it is very improbable for two particles to be at a highly repulsive distance (i.e., at distances where these functions diverge). Therefore it is unnecessary for the model to be perfect at these distances, as long as it is repulsive. It is insightful to consider a fluid as a collection of hard spheres, interacting only through Pauli repulsive forces. Neglecting the attractive forces between particles is justified by the following:
- Attractive forces are weak and isotropic. For a uniform (constant density) fluid these forces will average to zero.
- The cohesive force attributed to the change in entropy of gas to liquid phase is small for a dense gas; there is a small structural change between gas and liquid.
- Repulsive forces are strong enough to prevent any arrangement in which atoms overlap.
- Liquids arrange to minimize volume by maximizing packing of these hard spheres.
Sources
[edit | edit source]- ↑ Griffiths, David J. Introduction to Quantum Mechanics. 2rd ed., Pearson Prentice Hall, 2004.
- ↑ Stone, A. (2013). The theory of intermolecular forces. 2nd ed. Oxford: Clarendon Press, pp.146.
- ↑ Stone, A. (2013). The theory of intermolecular forces. 2nd ed. Oxford: Clarendon Press, pp.43-56
Hydrogen Bonds
Hydrogen bonding is a type of polar interaction between a hydrogen atom, covalently bound to a highly electronegative atom, and a hydrogen-bond acceptor. Hydrogen-bond acceptors fall into two main categories: atoms with a net negative charge, such as Cl^{-} or Br^{-}, and compounds with a polar bond, such as a ketone or carboxylic acid. These interactions are a subclass of dipole-dipole interactions. They are found in many different scenarios: in some organic solvents, in solid crystal structures, and in proteins and DNA.
Hydrogen Bonding in Liquids
Hydrogen bonds are present in liquids like water, ethanol, hydrogen fluoride, and N-methylacetamide. These hydrogen bonds are partially responsible for their large enthalpies of vaporization and for boiling points higher than those of hydrocarbons of comparable size. For example, the boiling point of hexane is 68 °C, but the boiling point of hexanol is 135 °C. This is due to the hydrogen bonds formed between the hydroxyl hydrogen atoms of one hexanol molecule and the oxygen atoms of other hexanol molecules.
Hydrogen Bonding in Solids
Hydrogen bonds are also present in solids. The molecules in such solids pack so as to form the maximum number of hydrogen bonds, leading to a stronger crystal. This can be observed in ice, and is responsible for some of water's unique properties. Each water molecule is capable of donating two hydrogen bonds and accepting two hydrogen bonds.
Strength of Interactions
The strength of a hydrogen bond depends on a couple of factors. The first is the difference in electronegativity between hydrogen and the atom it is attached to: the larger the difference, the stronger the interaction. The second is the radius of the acceptor, which must be small enough to let the hydrogen approach closely. Generally, the Slater radius should be between 0.65 and 1.00 Å; any larger, and the hydrogen cannot get close enough to the acceptor to interact attractively. The figure on the right illustrates the potential energy curves for the dimerization of HF and CH_{3}Cl. The two molecules have very similar dipole moments, but because of the hydrogen bonding between HF molecules, a deeper potential energy minimum is reached, illustrated by the larger well depth. The well also appears at a shorter distance for HF, owing to the previously mentioned dependence on the radius of the acceptor: F is smaller than Cl, allowing the hydrogen to approach more closely and interact more strongly. The directionality of the hydrogen bond also affects its strength; the optimal angle of interaction is 180°, and the optimal interaction distance is between 1.5 and 2.5 Å. Planar molecules can have very strong hydrogen-bonding capabilities, since a flat geometry allows bond dipoles to add constructively while avoiding steric clashes between other atoms in the molecule.
Dependence on Distance
The strength of a hydrogen bond falls off rather quickly with distance, following the same trend as other dipole-dipole interactions, as illustrated by the equation below:

Dipole-Dipole Interactions
V(r,\theta_1,\theta_2,\phi) = -\frac{\mu_1\,\mu_2}{4\pi\varepsilon_0 r^3}\left(2\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2\cos\phi\right)

where θ_{1} and θ_{2} are the two orientation angles and Φ is the dihedral angle. This equation illustrates the inverse relationship between the strength of the interaction and the cube of the distance between the dipoles.
Effect of Radius
Rotational Averaging
Rotational averaging describes the contribution to the potential energy from the rotational orientation of a charge-dipole interaction. Expectation values are used to give a single, thermally averaged value for the system's potential energy over all rotations.
For example, take a charged particle and a molecule with a permanent dipole. When they interact, the potential energy of this interaction can easily be calculated. For a dipole of length l, with a distance r between the dipole centre and the charged particle, the energy of interaction can be described by:

Potential Energy of a Charge-Dipole Interaction
V(r,\theta) = -\frac{q\,\mu\cos\theta}{4\pi\varepsilon_0 r^2}

where q is the charge of the particle, μ is the dipole moment, θ is the angle between r and the dipole vector, ε_{0} is the vacuum permittivity constant, and r is the distance between the particle and the dipole.
Geometrically, this interaction is dependent on the radius and the length of the dipole, as well as the orientation angle. If the radius between the ion and dipole is taken to be fixed, the angle θ can still change: the dipole can rotate about its centre relative to the interacting charged particle. The weights of the various orientations are described by a Boltzmann distribution, with the expectation value given generally by:
Expectation value (discrete states)
\langle E \rangle = \frac{\sum_i E_i\, e^{-E_i/k_B T}}{\sum_i e^{-E_i/k_B T}}

Expectation value (continuous states)
\langle E \rangle = \frac{\int E\, e^{-E/k_B T}\, d\tau}{\int e^{-E/k_B T}\, d\tau}

where ⟨E⟩ is the expectation value, E_{i} is the energy of a particular configuration, k_{B} is Boltzmann's constant, and T is the temperature. This Boltzmann weighting is summed over the energy levels of the system. The probability of a configuration is proportional to e^{-E/k_{B}T}, indicating that at a given temperature, lower-energy configurations are more probable. An equation can then be derived from this general expression, in order to relate it to the geometry and energy of a charge-dipole interaction.
Derivation of Rotationally-Averaged Charge-Dipole Interaction Potential
The orientationally averaged potential energy is the expectation value of the charge-dipole potential energy, averaged over the angle θ.

Starting with the potential energy of a charge-dipole interaction,

V(\theta) = -\frac{q\,\mu\cos\theta}{4\pi\varepsilon_0 r^2}

We let

C = \frac{q\,\mu}{4\pi\varepsilon_0 r^2}

This makes

V(\theta) = -C\cos\theta

The average over the dipole orientation, using the expectation value from classical statistical mechanics, is:

\langle V \rangle = \frac{\int_0^{\pi} \left(-C\cos\theta\right) e^{C\cos\theta/k_B T}\sin\theta\, d\theta}{\int_0^{\pi} e^{C\cos\theta/k_B T}\sin\theta\, d\theta}

Note: when integrating over an angle, the variable of integration becomes sinθ dθ.

To solve this integral, we must first use a first-order Taylor series approximation, because integrals of exponentials of cosθ do not have analytical solutions.

The first-order Taylor series approximation is as follows:

e^{x} \approx 1 + x

Using the Taylor series with x = C\cos\theta/k_B T gives:

e^{C\cos\theta/k_B T} \approx 1 + \frac{C\cos\theta}{k_B T}

The integral now becomes:

\langle V \rangle = \frac{\int_0^{\pi} \left(-C\cos\theta\right)\left(1 + \frac{C\cos\theta}{k_B T}\right)\sin\theta\, d\theta}{\int_0^{\pi} \left(1 + \frac{C\cos\theta}{k_B T}\right)\sin\theta\, d\theta}

Multiplying into the brackets gives:

\langle V \rangle = \frac{\int_0^{\pi} \left(-C\cos\theta - \frac{C^2\cos^2\theta}{k_B T}\right)\sin\theta\, d\theta}{\int_0^{\pi} \left(1 + \frac{C\cos\theta}{k_B T}\right)\sin\theta\, d\theta}

All terms that do not depend on θ are constants and can be factored out of the integrals. The terms can be expressed as four integrals:

\langle V \rangle = \frac{-C\int_0^{\pi}\cos\theta\sin\theta\, d\theta - \frac{C^2}{k_B T}\int_0^{\pi}\cos^2\theta\sin\theta\, d\theta}{\int_0^{\pi}\sin\theta\, d\theta + \frac{C}{k_B T}\int_0^{\pi}\cos\theta\sin\theta\, d\theta}

We must use trigonometric integrals to solve each of these four integrals:

First: \int_0^{\pi}\cos\theta\sin\theta\, d\theta = 0

Second: \int_0^{\pi}\cos^2\theta\sin\theta\, d\theta = \frac{2}{3}

Third: \int_0^{\pi}\sin\theta\, d\theta = 2

Fourth: \int_0^{\pi}\cos\theta\sin\theta\, d\theta = 0

Plugging each solved trigonometric integral back into the equation gives:

\langle V \rangle = \frac{-\frac{2}{3}\frac{C^2}{k_B T}}{2} = -\frac{C^2}{3 k_B T}

Finally, replace C with \frac{q\,\mu}{4\pi\varepsilon_0 r^2} to give:

The Orientational Average of the Charge-Dipole Interaction Energy
\langle V(r) \rangle = -\frac{q^2\mu^2}{3\left(4\pi\varepsilon_0\right)^2 k_B T\, r^4}
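This derivation can be checked numerically: a sketch that Boltzmann-averages V = -C cos θ over orientations by direct quadrature should agree with the -C²/(3k_BT) result in the weak-interaction limit (C ≪ k_BT), which is where the first-order Taylor expansion is valid.

```python
import math

def boltzmann_avg(x, n=100000):
    """Numerically average V/C = -cos(theta) with Boltzmann weight exp(x*cos(theta)),
    where x = C / (kB*T), using the sin(theta) d(theta) volume element (midpoint rule)."""
    num = den = 0.0
    d = math.pi / n
    for i in range(n):
        t = (i + 0.5) * d
        w = math.exp(x * math.cos(t)) * math.sin(t) * d
        num += -math.cos(t) * w
        den += w
    return num / den   # this is <V>/C

x = 0.01                    # weak-interaction limit, C << kB*T
numeric = boltzmann_avg(x)
analytic = -x / 3.0         # from <V> = -C^2 / (3 kB T), divided by C
print(numeric, analytic)
```

At larger x the two values diverge, which is a reminder that the r^{-4} rotationally averaged form holds only when the interaction is weak compared to the thermal energy.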
The Lennard-Jones Potential
The Lennard-Jones potential describes the interactions of two neutral particles using a relatively simple mathematical model. Two neutral molecules feel both attractive and repulsive forces based on their relative proximity and polarizability. The sum of these forces gives rise to the Lennard-Jones potential, as seen below:
V(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right] ^{[1]}

where ε is the potential well depth, σ is the distance at which the potential equals zero, and R_{min} is the distance at which the potential reaches its minimum, i.e., the equilibrium separation of the two particles (roughly twice the van der Waals radius of the atom).

The relationship between σ and R_{min} is

R_{min} = 2^{1/6}\sigma
Pauli Repulsion
The first half of the Lennard-Jones potential is Pauli repulsion. This occurs when two closed-shell atoms or molecules come into close proximity and their electron density distributions overlap, causing strong inter-electron repulsion and, at extremely short distances, inter-nuclear repulsion. This repulsion follows the exponential decay of electron density with distance:

V_{rep}(r) = A\, e^{-cr}

where A and c are constants and r is the intermolecular distance. This exponential form is used in the Buckingham potential.^{[2]} In a liquid, however, it is very unlikely that two particles will be found at highly repulsive distances, so a simplified expression can be used by assuming the potential has an r^{-12} dependence (note that this high exponent means that the repulsion energy drops off extremely fast as the molecules separate). The resulting simple polynomial is:

V_{rep}(r) = \frac{C_{12}}{r^{12}}

where the C_{12} coefficient is defined as:

C_{12} = 4\epsilon\sigma^{12}
London Dispersion

The second half of the Lennard-Jones potential is known as London dispersion, or the induced dipole-induced dipole interaction. While a particular molecule may not normally have a dipole moment, at any instant in time its electrons may be asymmetrically distributed, giving an instantaneous dipole. The strength of these instantaneous dipoles, and thus the strength of the attractive force, depends on the polarizability and ionization potential of the molecules. The ionization potential measures how strongly the outer electrons are held by the atoms; the more polarizable the molecule, the more its electron density can distort, creating larger instantaneous dipoles. Much like Pauli repulsion, this force depends on a coefficient, C_{6}, and decays as the molecules move further apart, in this case with an r^{-6} dependence:

V_{disp}(r) = -\frac{C_6}{r^6}
For two identical molecules, the coefficient can be estimated as

C_6 = \frac{3}{4}\alpha'^2 I

where α' is the polarizability, usually given as a volume, and I is the ionization energy, usually given in electron volts. Lastly, the C_{6} constant can also be expressed in terms of the Lennard-Jones parameters:

C_6 = 4\epsilon\sigma^{6}
Combination Rules
In the case of two different molecules interacting, a combination rule called the Lorentz-Berthelot rule can be used to create the cross-interaction σ and ε values: σ is taken as the arithmetic mean of the two atomic σ values, and ε as the geometric mean of the two ε values. For example, an Ar-Xe Lennard-Jones plot has intermediate σ and ε values between those of Ar-Ar and Xe-Xe. An example of this combination rule can be seen in the figure to the right.
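A sketch of the Lorentz-Berthelot rule follows. The Ar parameters are those used in the example below; the Xe parameters are assumed values included only for illustration.

```python
import math

def lorentz_berthelot(sigma_a, eps_a, sigma_b, eps_b):
    """Cross terms: arithmetic mean for sigma, geometric mean for epsilon."""
    return 0.5 * (sigma_a + sigma_b), math.sqrt(eps_a * eps_b)

sigma_ar, eps_ar = 3.40, 0.997   # Ar (sigma in Angstrom, eps in kJ/mol)
sigma_xe, eps_xe = 3.96, 1.90    # Xe (assumed values, for illustration)

sigma_mix, eps_mix = lorentz_berthelot(sigma_ar, eps_ar, sigma_xe, eps_xe)
print(f"Ar-Xe: sigma = {sigma_mix:.2f} A, eps = {eps_mix:.3f} kJ/mol")
```

As expected, both mixed parameters fall between the pure Ar-Ar and Xe-Xe values.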
Example
Calculate the intermolecular potential between two Argon (Ar) atoms separated by a distance of 4.0 Å (use ϵ=0.997 kJ/mol and σ=3.40 Å).
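A minimal script for this example; with σ/r = 0.85, the bracketed term is negative, so the atoms attract at this separation.

```python
def lennard_jones(r, eps, sigma):
    """V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]; r and sigma in the same units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

eps, sigma = 0.997, 3.40    # kJ/mol, Angstrom
v = lennard_jones(4.0, eps, sigma)
print(f"V(4.0 A) = {v:.4f} kJ/mol")

r_min = 2.0 ** (1.0 / 6.0) * sigma   # position of the minimum
```

The potential comes out to about -0.94 kJ/mol; as a sanity check of the functional form, it crosses zero at r = σ and reaches -ε at R_min = 2^{1/6}σ, as the definitions above require.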
References
[edit | edit source]- ↑ Lennard-Jones, J. E. (1924), "On the Determination of Molecular Fields", Proc. R. Soc. Lond. A, 106 (738): 463–477, Bibcode:1924RSPSA.106..463J, doi:10.1098/rspa.1924.0082.
- ↑ Buckingham, R. A. (1938), "The Classical Equation of State of Gaseous Helium, Neon and Argon", Proc. R. Soc. Lond. A, 168: 264–283.
See Also
Wikipedia: Lennard-Jones potential. ChemWiki: Lennard-Jones.
Intramolecular Forces
Bond Stretches
Bond stretches are the alteration of the bond length within a molecule. Using water as an example, there are two stretching modes: symmetric, in which the two oxygen-hydrogen bonds lengthen and shorten at the same time, and asymmetric, in which one lengthens while the other shortens. The energy associated with this motion can be described using the following equation:

V_{bond}(r) = k_{bond}\left(r - r_{e}\right)^2

This is a Hooke's-law relationship, where k_{bond} is the spring constant for the bond and the x^{2} term of Hooke's law is replaced by the displacement of the bond length (r) from its equilibrium value (r_{e}). This harmonic approximation gives an accurate value for the energy of small displacements of the bond length.
Bond Angle Bending
Bond-angle energies arise from the distortion of the optimal bond angle to a less favourable conformation. This can be either a decrease in angle, in which the orbitals move closer to one another and the potential energy increases, or an increase in angle, in which the orbitals move away from the ideal overlap that gives the molecule its favourable orbital interactions and stability. Either distortion raises the potential energy by moving the molecule out of its low-energy conformation. This energy relationship can be described by the following equation:

V_{angle}(\theta) = k_{angle}\left(\theta - \theta_{e}\right)^2

We again see a Hooke's-law relationship, where k_{angle} is the spring constant for the angle and the x^{2} term of Hooke's law is replaced by the displacement of the bond angle (θ) from its equilibrium value (θ_{e}). This harmonic approximation gives an accurate value for the energy of small displacements of the bond angle.
Torsional Rotations
Torsional rotations, which can also be described as dihedral interactions, arise when a molecule contains a chain of four or more atoms. This interaction depends on the rotation of the central bond into a favourable orientation with the most stable overlap conformation. The named conformations are synperiplanar, anticlinal, gauche, and antiperiplanar, listed in order of highest to lowest energy. By examining Newman projections, we can see where the unfavourable conformational interactions lie; methyl groups, for example, are considered large and have unfavourable overlap with one another. Mathematically, this can be represented using the following equation:

V_{torsion}(\phi) = k_{\phi}\left[1 + \cos\left(n\phi - \delta\right)\right]

where k_{φ} is the rotational barrier of the system, n is the periodicity (the number of times the rotational profile repeats over 360 degrees), ɸ is the dihedral angle, and δ is the offset of the function.
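The three bonded terms described above can be collected into a small sketch. All constants here are illustrative placeholders, not parameters from any real force field; angles are handled in radians.

```python
import math

def v_bond(r, k_bond, r_e):
    """Harmonic bond stretch: k_bond * (r - r_e)^2."""
    return k_bond * (r - r_e) ** 2

def v_angle(theta, k_angle, theta_e):
    """Harmonic angle bend: k_angle * (theta - theta_e)^2."""
    return k_angle * (theta - theta_e) ** 2

def v_torsion(phi, k_phi, n, delta):
    """Periodic dihedral: k_phi * (1 + cos(n*phi - delta))."""
    return k_phi * (1.0 + math.cos(n * phi - delta))

# Illustrative evaluations (placeholder constants):
e_bond = v_bond(1.01, 2000.0, 0.96)                                 # stretched O-H-like bond
e_angle = v_angle(math.radians(109.5), 150.0, math.radians(104.5))  # opened angle
e_torsion = v_torsion(math.radians(60.0), 12.0, 3, 0.0)             # staggered, 3-fold term
print(e_bond, e_angle, e_torsion)
```

Note that a staggered dihedral (φ = 60°) sits at the minimum of a three-fold torsional term with δ = 0, so its energy is zero, while any displacement of a bond or angle from equilibrium raises the energy.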
Improper Torsion
An improper torsion is a special type of torsional potential designed to enforce the planarity of a set of atoms (ijkl), where i is the index of the central atom. The improper torsion angle is defined as the angle between the planes defined by atoms ijk and jkl.
Internal Non-Bonded Interactions
The atoms of a molecule also have intramolecular steric, dispersive, and electrostatic interactions with each other. These interactions within a molecule determine its conformation: the steric repulsion and the Coulombic and dispersive interactions all contribute to the conformation of the molecule. Every atom in the system can be assigned a partial charge (q_{i}). These interactions are calculated using standard Lennard-Jones and Coulombic pairwise interaction terms,

V_{nb} = \sum_{i<j}\left\{4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}\right\}

Here we see the combination of the Lennard-Jones equation and Coulomb's law. With this relationship we can calculate the energy resulting from the non-bonded intramolecular interactions. Note that pairs of atoms that are directly bonded, or that form a bond angle, are excluded from this sum; those interactions are already included through the calculations in the previous sections.
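A sketch of this pairwise sum is below. The Coulomb prefactor 1389.35 kJ·mol⁻¹·Å·e⁻² is the standard value of 1/(4πε₀) in these mixed units; the pair parameters themselves are illustrative placeholders, and the caller is assumed to have already excluded bonded (1-2) and angle (1-3) pairs.

```python
COULOMB_K = 1389.35  # 1/(4*pi*eps0) in kJ*mol^-1*Angstrom*e^-2

def nonbonded_pair(r, eps, sigma, q_i, q_j):
    """Lennard-Jones + Coulomb energy (kJ/mol) for a single atom pair.
    r and sigma in Angstrom, eps in kJ/mol, charges in units of e."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * eps * (sr6 * sr6 - sr6)
    coulomb = COULOMB_K * q_i * q_j / r
    return lj + coulomb

def nonbonded_total(pairs):
    """Sum over (r, eps, sigma, q_i, q_j) tuples; bonded (1-2) and
    angle (1-3) pairs are assumed to have been excluded beforehand."""
    return sum(nonbonded_pair(*p) for p in pairs)

# Two illustrative pairs: an apolar contact and an oppositely charged contact
pairs = [(4.0, 0.5, 3.4, 0.0, 0.0),
         (3.0, 0.2, 3.2, +0.4, -0.4)]
print(nonbonded_total(pairs))
```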
Molecular Mechanical Force Fields
General Form of Potential Energy Function
The overall potential energy of a system depends on an enormous number of factors, and it is the sum of the individual attractive and repulsive interactions between all the atoms in the system that defines the "force field". The potential energy for a set of atomic coordinates (r) is the sum of bonded (i.e., bond, angle, and dihedral) and non-bonded (i.e., repulsion/dispersion and electrostatic) terms. This sum ultimately defines the molecular mechanical force field.
Due to this complexity, calculating the potential energy of a set of molecules with a force field requires a complete set of known parameters, which in turn require several different methods of determination. These methods are outlined in this section:
Interaction | Term type | Expression | Parameters |
---|---|---|---|
Bonds | Bonded | V = k_bond(r − r_e)² | spring constant (k_bond), equilibrium length (r_e) |
Angles | Bonded | V = k_angle(θ − θ_e)² | spring constant (k_angle), equilibrium angle (θ_e) |
Dihedrals | Bonded | V = k_φ[1 + cos(nφ − δ)] | barrier height (k_φ), periodicity (n), offset (δ) |
Repulsion/Dispersion | Non-bonded | V = 4ε[(σ/r)¹² − (σ/r)⁶] | atomic radii (σ), well depth (ε) |
Electrostatics | Non-bonded | V = q_i q_j/(4πε₀r) | partial atomic charges (q) |
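The terms of the table can be assembled into a single potential-energy sum. The sketch below assumes the harmonic and periodic functional forms listed above, takes precomputed geometry (bond lengths, angles, dihedral angles, pair energies) as input, and uses entirely illustrative names and values:

```python
import math

def harmonic(k, x, x0):
    """Harmonic (Hooke's law) term, k * (x - x0)**2."""
    return k * (x - x0) ** 2

def force_field_energy(bonds, angles, dihedrals, nb_pairs):
    """Total molecular mechanical energy from precomputed geometry.
    bonds:     list of (k_bond, r, r_e)
    angles:    list of (k_angle, theta, theta_e)   (radians)
    dihedrals: list of (k_phi, n, phi, delta)      (radians)
    nb_pairs:  list of (coulomb_energy, lj_energy) per non-bonded pair."""
    e = sum(harmonic(k, r, r0) for k, r, r0 in bonds)
    e += sum(harmonic(k, th, th0) for k, th, th0 in angles)
    e += sum(k * (1.0 + math.cos(n * phi - d)) for k, n, phi, d in dihedrals)
    e += sum(coul + lj for coul, lj in nb_pairs)
    return e
```

A single bond stretched 0.1 Å beyond equilibrium with k_bond = 100 (arbitrary units) contributes 100 × 0.1² = 1.0 to the total.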
Parameterization
Empirical Parameters
For small molecules, parameters may be defined to fit experimental properties. Each set of non-bonded parameters produces different bulk physical properties for a liquid, and combinations of parameters can be tested until the model properly represents the properties of the liquid. An example is determining a model for bulk water under the assumption of no Lennard-Jones interactions for hydrogen atoms: parameters are adjusted together until the model reproduces the expected characteristics of bulk water.
Calculating Electrostatic Interactions
Starting with the non-bonded terms, we can look at electrostatics. Every intermolecular pair of atoms has an electrostatic interaction, which can be summed over all atoms in one molecule (A) against all atoms in another molecule (B). This quickly adds up to a massive sum, even for systems of a few complex molecules:
With regard to partial charges, there is no rigorous way to reduce a continuous charge distribution to point charges, so certain constraints are imposed: the sum of the partial charges should equal the formal charge of the molecule (Σ_i q_i = Q_total, seen below), and equivalent atoms should carry the same charge. Charges can be chosen to match the gas-phase dipole moment, treated as free parameters adjusted to reproduce target properties, or fit to the quantum mechanical electrostatics:
Charge-fitting schemes fit partial charges to the quantum mechanical electrostatic potential (the potential energy a unit point charge would have at position r in space). The potential can be calculated at a set of points around the molecule to generate an electrostatic potential surface (i.e., a map of the potential around the molecule from which the localized areas of charge can be determined):
Target Electrostatic Potential Data from Quantum Chemical Calculations
The first term represents the electron density and the second term represents the nuclear charges Z_i at positions R_i:
φ_QM(r) = −∫ ρ(r′)/|r − r′| dr′ + Σ_i Z_i/|r − R_i|
Molecular Mechanical Partial Atomic Charges
The molecular mechanical potential is generated by the partial atomic charges q_i at positions R_i:
φ_MM(r) = Σ_i q_i/|r − R_i|
The point charges (q) are chosen such that φ_MM ≈ φ_QM as closely as possible, thereby fitting the molecular mechanical model to the quantum mechanical potential.
Restraining terms can be introduced to give more physically reasonable models (known as Restrained Electrostatic Potential (RESP) fitting). Such restraints limit charges to moderate, chemically sensible values. Restraints are also placed to ensure that chemically equivalent groups carry the same charge. These groups are given an atom type distinguished by element, hybridization, and bonding partners, and parameters are defined for each type. For example, all aromatic carbons in the same benzene ring should have the same parameters.
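The core of such a fit — least-squares matching of point charges to a sampled electrostatic potential, with the total-charge constraint enforced by a Lagrange multiplier — can be sketched in a few lines. This is a bare-bones illustration in atomic units (potential = q/r); real RESP additionally applies hyperbolic restraints pulling charges toward zero, and all function and variable names here are assumptions:

```python
def dist(p, q):
    """Euclidean distance between two 3-vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(rhs)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_charges(grid, atoms, esp, total_charge=0.0):
    """Least-squares fit of point charges to an ESP sampled on grid points,
    with a Lagrange multiplier enforcing sum(q) = total_charge."""
    n = len(atoms)
    A = [[1.0 / dist(g, a) for a in atoms] for g in grid]
    # Normal equations A^T A q = A^T esp, bordered by the charge constraint.
    M = [[sum(A[p][i] * A[p][j] for p in range(len(grid))) for j in range(n)] + [1.0]
         for i in range(n)]
    M.append([1.0] * n + [0.0])
    b = [sum(A[p][i] * esp[p] for p in range(len(grid))) for i in range(n)]
    b.append(total_charge)
    return solve_linear(M, b)[:n]
```

Given potential values generated by a known pair of charges (+0.5 e and −0.5 e), the fit recovers those charges exactly, since the constraint is consistent with the data.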
Generalized Amber Force Field (GAFF)
GAFF is a commonly used force field designed to provide reasonable parameters for simulations of small organic molecules. GAFF uses the RESP method to assign partial charges, and each atom type defines the Lennard-Jones parameters and internal bonded terms associated with that atom. Atom type definitions can be extremely specific (e.g., H on an aliphatic C with 3 electron-withdrawing groups; inner sp² C in conjugated ring systems).
Lennard-Jones Potential
A combination of the Pauli repulsion and London dispersion attraction terms, the Lennard-Jones potential is defined using two parameters: the van der Waals radius (σ) and the well depth (ε):
V(r) = 4ε[(σ/r)¹² − (σ/r)⁶]
These parameters govern the packing of molecules in liquids and must compensate for errors in all other properties. The van der Waals radius sets the limit at which molecules may approach one another (the potential energy is positive for separations closer than σ). The well depth sets the minimum of the potential energy, which occurs when the distance between the molecules is R_min (the separation of minimum potential energy, R_min = 2^(1/6)σ). The first term represents Pauli repulsion; the second term represents London dispersion attraction.
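The shape of the Lennard-Jones curve and the location of its minimum can be checked numerically; the argon-like parameter values below are rough illustrative numbers, not reference data:

```python
def lennard_jones(r, sigma, epsilon):
    """V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

sigma, eps = 3.4, 0.996          # rough argon-like values (angstrom, kJ/mol)
r_min = 2.0 ** (1.0 / 6.0) * sigma  # separation of the energy minimum
```

At r = σ the potential crosses zero, and at r = R_min = 2^(1/6)σ it reaches its minimum value of exactly −ε.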
Bonding Interactions
[edit | edit source]Bond Stretch
Atoms within a molecule vary their separation (the bond stretches or compresses), and this behaviour can be described as a spring. Hooke's law can therefore be applied to bond lengths to define the energy of stretching or compressing a bond, V = k_bond(r − r_e)², according to a spring constant (k_bond) and an equilibrium length (r_e). The spring constant may be determined from the infrared vibrational frequency of the bond stretch (according to the harmonic-oscillator relation ν = (1/2π)√(k/μ), where μ is the reduced mass), while equilibrium bond lengths can be found through rotational spectroscopy and crystallography.
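As an illustration of extracting a spring constant from a vibrational frequency, the harmonic-oscillator relation ν = (1/2π)√(k/μ) can be inverted. The H–Cl numbers below are approximate, and the result is the conventional spring constant of the ½kx² form:

```python
import math

C_CM_PER_S = 2.99792458e10    # speed of light in cm/s
AMU_TO_KG = 1.66053907e-27    # atomic mass unit in kg

def spring_constant(wavenumber_cm, mu_amu):
    """k = mu * (2*pi*c*nu_tilde)**2, returned in N/m."""
    omega = 2.0 * math.pi * C_CM_PER_S * wavenumber_cm  # angular frequency, rad/s
    return mu_amu * AMU_TO_KG * omega ** 2

# H-35Cl stretch: fundamental near 2886 cm^-1 (approximate, for illustration).
mu_hcl = 1.0079 * 34.9689 / (1.0079 + 34.9689)  # reduced mass in amu
k_hcl = spring_constant(2886.0, mu_hcl)          # on the order of 5e2 N/m
```

The computed value lands in the several-hundred N/m range typical of covalent single bonds.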
Angle Stretch
The potential energy of bending a bond angle can also be approximated by Hooke's law, V = k_angle(θ − θ_e)², according to a spring constant (k_angle) and an equilibrium angle (θ_e). Again, the spring constant may be determined from the infrared vibrational frequency of the bending mode, while the equilibrium angle can be found through rotational spectroscopy and crystallography.
Dihedral Parameters
Also known as torsional rotations, dihedral interactions involve 4 bonded atoms. They describe rotation about the central bond and the resulting overlap of the two outer atoms (designated 1,4 interactions, with atoms 2 and 3 forming the central bond). These overlaps may raise or lower the potential energy depending on the conformation (e.g., gauche, trans, synperiplanar), and the relationship depends on the barrier height (k_φ, the energy required to rotate about the central bond), the periodicity (n), and the offset (δ). These parameters can be determined from quantum chemistry or from experimental isomer populations.
Calculation of molecular forces using quantum chemistry
Intermolecular Interactions
Intermolecular interactions can be calculated using quantum chemical programs: the interaction energy of a complex is the difference between the energy of the complex and the energies of its isolated fragments, ΔE_int = E_AB − E_A − E_B. The fragments are the atoms or molecules that comprise the complex.
Basis Set Superposition Error
Many quantum chemical programs use atom centered basis functions to represent the atomic orbitals. For a given calculation, the number of basis functions used is fixed and finite. Inevitably, the description of the orbitals will be incomplete, in that a more accurate answer could be obtained if more basis functions were added.
When calculating intermolecular interactions, the calculation of the complex will include the basis functions from all the fragments. At close intermolecular contacts, the tails of these basis functions extend into the space of atoms in other fragments. The fragments are thereby stabilized by the presence of these additional basis functions, so the energy of the complex includes both the stabilization due to the intermolecular interaction and this "basis set superposition error."
Counterpoise Corrections
In the counterpoise correction, the energies of the individual fragments are recalculated in the full basis set of the complex, using "ghost" basis functions placed at the positions of the other fragment's atoms. The difference between the fragment energies computed in their own basis and in the full basis estimates the basis set superposition error, which is then subtracted from the interaction energy.
Intramolecular Interactions
Dihedral Potentials
Statistical properties
Statistical thermodynamics describes physical systems in terms of probability distributions.
Probability Distributions
A probability distribution is a function that gives the likelihood of each possible outcome. In statistical mechanics, this generally means the probability of a system being in a particular state.
Gaussian Distribution
A Gaussian distribution, also called a normal distribution, is a function with a bell-shaped curve. Any random variable with the normal probability density
p(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))
is called a normal variable with mean μ and variance σ².^{[1]}
The Boltzmann distribution is one such probability distribution; it gives the probability of a system occupying a state as a function of that state's energy and the temperature:
p_i = exp(−E_i/k_B T) / Σ_j exp(−E_j/k_B T)
^{[2]}
As shown below, physical properties of a system can be calculated using this Boltzmann distribution.
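A sketch of evaluating normalized Boltzmann populations for a discrete set of states (energies in kJ/mol; the two-state example values are invented for illustration):

```python
import math

def boltzmann_probabilities(energies, T, kB=0.0083144626):
    """Normalized populations p_i = exp(-E_i/kB*T) / sum_j exp(-E_j/kB*T).
    Energies in kJ/mol, kB in kJ mol^-1 K^-1, T in K."""
    weights = [math.exp(-e / (kB * T)) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

# Two states separated by 2.5 kJ/mol at 300 K: the lower state dominates.
p = boltzmann_probabilities([0.0, 2.5], 300.0)
```

The populations always sum to one, and the ratio p₁/p₀ equals exp(−ΔE/k_BT).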
Conformational Distribution
Within solids, liquids, and gases, atoms can take on different orientations and arrangements. Conformational changes within a molecule occur by rotations around bonds. For example, gauche or eclipsed conformations, trans or cis, and every conformation in between all fall under the distribution of conformations.
Calculating Physical Properties
Macroscopic properties are averages over the arrangements a system takes on over time. Assuming all possible energy levels are continuous, we can employ classical mechanics to calculate physical properties. This assumption is only valid when the particles are heavy and the forces are relatively soft.
Averages
The ensemble average in statistical mechanics refers to the mean over all possible states of a system, as given by its probability distribution. It is dependent on the ensemble chosen (e.g., canonical, microcanonical, etc.).
The expectation value gives the average value obtained when a physical property of a system is measured. Depending on the type of distribution, it may not be the most probable value, but it is the probability-weighted value we expect to measure. In a classical system, the expectation value is an integral over all of the possible configurations of the system.
where A represents the property being calculated, U represents the potential energy, k_B represents the Boltzmann constant, and T represents the temperature of the system.
In classical statistical mechanics, the classical ensemble average is the normalized Boltzmann weighted integral over all phase space.
Where H is the Hamiltonian of the described system. This expression can be used to find many physical properties, such as the average energy of single-particle and many-body systems, by integrating over the spatial coordinates and over the Maxwell distribution of momenta.
In a quantum mechanical system, the expectation value is the Boltzmann weighted sum over energy levels.
Example: Conformationally Averaged Dipole Moment
[edit | edit source]Conformational averaging using the Boltzmann distribution gives a way of finding the average dipole moment, as described in previous section.
The dipole moment of a molecule changes with its conformation because it is the vector sum of all the bond dipole moments; the observed (or 'expected') value is therefore a weighted average over the conformations. For example, in 1,2-dichloroethane the trans conformation is non-polar but the gauche conformation is polar. Since the molecular dipole is a vector quantity, the conformationally averaged dipole moment is obtained from the Boltzmann-weighted average of the squares of the individual dipole moments.
Using this formula, the conformationally averaged dipole moment can be calculated for a molecule like 1,2-dichloroethane, where the trans conformation has no dipole and the gauche conformation does.
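A sketch of such a conformational average, reporting the root of the Boltzmann-weighted mean square dipole. The dipole and energy values below are invented for illustration, not experimental data for 1,2-dichloroethane:

```python
import math

def averaged_dipole(dipoles, energies, T, kB=0.0083144626):
    """Square root of the Boltzmann-weighted average of mu**2 over conformers.
    dipoles in debye, energies in kJ/mol, T in K."""
    weights = [math.exp(-e / (kB * T)) for e in energies]
    z = sum(weights)
    mu_sq = sum(w * mu ** 2 for w, mu in zip(weights, dipoles)) / z
    return math.sqrt(mu_sq)

# trans conformer: zero dipole at zero energy; gauche-like conformer:
# nonzero dipole, higher energy (values are hypothetical).
mu_avg = averaged_dipole([0.0, 2.6], [0.0, 4.0], 298.0)
```

The average falls between the two conformer dipoles, and if all conformers were identical the function would simply return their common dipole.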
Variance
Variance is a value that indicates how widely spread a set of data is relative to the average. The variance of a random variable X is the expectation of the squared deviation from its mean: Var(X) = E[(X − μ)²], where E denotes the expectation and μ the mean. If the variance (or its square root, the standard deviation) is low, the data lie close to the expected value. Variance is commonly denoted σ² or Var(X).
References
Thermodynamic ensembles
The entire universe is governed by the laws of thermodynamics through the transfer of energy between matter. This process is too complex to consider directly; instead, we consider simplifications in which part of the universe (i.e., the system) is treated separately from the rest (the surroundings). The system houses the process or thermodynamic state under consideration, while the surroundings contain everything else.
The system can be described using an ensemble, also known as a statistical ensemble: an idealization consisting of a large number of copies of the system, considered all at once, each representing a possible state that the real system might be in. A thermodynamic ensemble is a statistical ensemble that is in statistical equilibrium. It provides a way to derive the properties of real thermodynamic systems from the laws of classical and quantum mechanics.
We can consider systems with different degrees of separation from their surroundings, from completely isolated (e.g., the microcanonical ensemble) to completely open (e.g., the grand canonical ensemble). For the purposes of molecular simulation, however, only three main ensembles will be considered: the microcanonical, canonical, and isothermal-isobaric ensembles.
Microcanonical ensemble
The microcanonical ensemble is a system that is completely isolated from its surroundings. There is no transfer of energy or matter between the system and the surroundings, and the volume of the system remains fixed. In other words, this is a physical system where the total number of particles (N), the total volume (V), and the total energy (E) are constant. The microcanonical ensemble is typically abbreviated NVE.
The total energy of the system is constant, but there is no defined temperature of the microcanonical ensemble. Temperature can only be defined for systems interacting with their surroundings. Isolated systems, such as microcanonical, are only described in terms of their composition, volume, and total energy. In statistical mechanics, a microcanonical ensemble is used to represent the possible states of a system which have a specified total energy.
Canonical ensemble
In the canonical ensemble, the volume of the system is fixed, and energy can transfer across the boundary between system and surroundings, but matter cannot. We can describe systems as being immersed in a heat bath at a temperature (T), where the heat bath is much larger than the system. No amount of heat put off by the system will significantly increase the temperature of the surroundings. The canonical ensemble applies to systems of any size; while it is necessary to assume that the relative size of the heat bath is very large, the system itself may be small or large. Now because the system and surroundings are in thermal contact, the system will transfer heat (q) to and from the surroundings until they are in thermal equilibrium. Therefore, unlike the microcanonical ensemble, the temperature of the canonical ensemble can be a defined constant (T). This ensemble is typically abbreviated in terms of constant number of particles, total volume, and temperature (NVT).
The canonical ensemble is a statistical mechanics ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at some fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ only in total energy. In the microcanonical ensemble, the total energy is fixed, but in the canonical ensemble the energy is no longer constant. It can take on a range of values depending on the temperature. The canonical ensemble is important when attempting to describe the Helmholtz free energy of a system, which is the maximum amount of work a system can do at a constant volume (V) and temperature (T).
Isothermal-Isobaric ensemble
In the isothermal-isobaric ensemble, energy can transfer across the boundary, but matter cannot. The volume of the system can change such that the internal pressure of the system matches the pressure exerted on the system by its surroundings. Similar to the canonical ensemble, we can describe the isothermal-isobaric ensemble as a system immersed in a heat bath at a temperature (T), where the heat bath is much larger than the system. No amount of heat put off by the system will significantly increase the temperature of the surroundings.
The isothermal-isobaric ensemble is a statistical mechanical ensemble that maintains a constant total number of particles, and constant temperature (T) and pressure (p), typically abbreviated NpT. This ensemble plays an important role in chemistry since the majority of important chemical reactions are carried out under constant pressure conditions. The isothermal-isobaric ensemble is also important when attempting to describe the Gibbs free energy of a system, which is the maximum amount of work a system can do at constant pressure (p) and temperature (T).
Phase space
The Hamiltonian
The Hamiltonian, H, is a function that gives the sum of the potential and kinetic energy of a system. For the systems considered here, the Hamiltonian is a function of the positions (r) and momenta (p) of the particles in the system. The potential energy, U(r), depends only on the positions of the atoms, and the kinetic energy, K(p), depends only on the momenta of the particles [1]. Therefore, the Hamiltonian can be separated into a potential energy term and a kinetic energy term,
H(r, p) = U(r) + K(p)
The microcanonical ensemble is the statistical ensemble used to represent the isolated system, where the number of particles (N), the volume (V), and the total energy (E) are constant.
Phase Space
The unique positions and momenta of each particle, for a system of N particles, can be described by a 6N-dimensional space called phase space. Each particle in a system has three unique position variables, ranging from 0 to L, and three unique momentum variables, ranging from −∞ to +∞ [1]. These 6N numbers constitute the microscopic state of the system at time t [2,3]. A system of N particles thus corresponds to a phase space of 6N variables, which describes every conceivable combination of positions and momenta of all particles in a mechanical system [1,2,3].
Phase Point
A phase point refers to any one point in the phase space of an N-body system at a time t [2]. The dynamics of the system are fully described by the motion and trajectory of the phase point as it travels through phase space [2]. Every classical state can be considered equally probable when N, V, and E are held constant [3].
Example
Consider a system containing two particles of argon (labelled 1 and 2, respectively), where N = 2. Since phase space has 6N dimensions, the total number of phase-space variables becomes 6(2) = 12.
- r_{1}, position of particle 1: (x₁, y₁, z₁)
- r_{2}, position of particle 2: (x₂, y₂, z₂)
- p_{1}, momentum of particle 1: (p_{x1}, p_{y1}, p_{z1})
- p_{2}, momentum of particle 2: (p_{x2}, p_{y2}, p_{z2})
Argon is a noble gas and largely inert to intermolecular interactions with other particles. Accordingly, the two argon atoms in this hypothetical system are treated as non-interacting: potential energy interactions between the particles are not accounted for, and U(r) = 0.
The Hamiltonian becomes the sum of the kinetic energies of the two particles:
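The two-particle kinetic Hamiltonian can be evaluated directly, as a sketch; the momentum values and units below are arbitrary illustrations:

```python
def kinetic_hamiltonian(momenta, mass):
    """H = sum over particles of |p|**2 / (2m), for non-interacting
    particles (U = 0). `momenta` is a list of (px, py, pz) tuples."""
    return sum((px ** 2 + py ** 2 + pz ** 2) / (2.0 * mass)
               for px, py, pz in momenta)

# Two argon atoms (mass ~ 39.95 amu) with illustrative momenta.
H = kinetic_hamiltonian([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)], 39.95)
```

Here H = (1² + 2²)/(2 × 39.95) in the chosen units; adding a third particle would simply extend the sum.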
The Classical Partition Function
In quantum mechanical systems, the system must occupy a set of discrete (quantized) states. These states correspond to the energy levels of the system, labelled i = 1, 2, 3, …. A partition function can be defined as a sum over these states:
Q = Σ_i exp(−E_i/k_B T)
Where E_i is the energy of state i, k_B is the Boltzmann constant, and T is the temperature.
Explicit sums are practical when the energy states of molecules are independent and there are no intermolecular interactions. In systems where intermolecular interactions are significant, it is generally more practical to approximate the system as existing in terms of continuous variables. The states are approximated to be continuous and can be integrated with respect to the phase space of the system while simultaneously sampling every state of the system. This leads to the classical partition function:
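For the discrete (quantum) case, the sum over states is direct. The two-level example below uses reduced units (k_B = 1) and invented energy levels:

```python
import math

def discrete_partition_function(energies, T, kB=1.0):
    """Q = sum over states of exp(-E_i / (kB * T)), in reduced units."""
    return sum(math.exp(-e / (kB * T)) for e in energies)

# Two-level system with E = 0 and E = 1 at kB*T = 1: Q = 1 + exp(-1).
Q = discrete_partition_function([0.0, 1.0], 1.0)
```

The classical partition function replaces this explicit sum by an integral over the 6N phase-space variables.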
Many Body Systems
The expectation values of a classical system can be evaluated by integrating over phase space. The classical partition function is an integral over 6N phase space variables:
References
[edit | edit source]- McQuarrie, Donald A. 1973. Statistical Thermodynamics. New York, N.Y. Harper & Row, Publishers, Inc. Pages 117-118.
- Tuckerman, Mark E. 2010. Statistical Mechanics: Theory and Molecular Simulation. Oxford England. New York: Oxford University Press. Pages 2-5.
Dilute gases
Ideal Gas Law
Gases are often described using the ideal gas law, which relates the pressure of a gas to its density with a simple expression:
p = ρ k_B T
Where ρ is the density of the gas, p is the pressure of the gas, k_B is the Boltzmann constant, equal to 1.380649 × 10⁻²³ J/K, and T is the temperature in kelvin. The expression is derived by approximating a gas as point masses that have only kinetic energy and experience perfectly elastic collisions. Unfortunately, this theory breaks down at densities where intermolecular forces become significant, that is, when the potential energy is non-zero. The ideal gas law is suitable only for very dilute gases.
Virial Theorem
To describe the properties of dilute gases more accurately, the Virial equation of state is used. It accounts for the effects of intermolecular forces through an expansion in higher powers of the density. Mathematically, this is an infinite power series,
p = k_B T [ρ + B(T)ρ² + C(T)ρ³ + ⋯]
where B(T) and C(T) are the second and third Virial coefficients.
At low densities, the deviations from ideal gas behavior are sufficiently described by the second Virial coefficient, B(T).
Where V is the volume of the gas and Z is the configurational integral. The configurational integral for the second Virial coefficient is the contribution of every possible pair of positions, weighted by its Boltzmann factor. For the third Virial coefficient, the configurational integral is the contribution of three interacting particles; by extension, the configurational integral for the nth Virial coefficient is the contribution of n interacting particles. For this reason, higher Virial coefficients are complicated to derive; fortunately, they are only necessary for describing gases at pressures above about 10 atm^{[3]}. The configurational integral for two interacting particles is as follows:
Where u(r₁, r₂) is the potential energy of the interaction of a single pair of particles, and r₁ and r₂ are the positions of particles 1 and 2. The second Virial coefficient can be written in terms of the pairwise intermolecular interaction potential, u(r), if the position of particle 2 is defined relative to the position of particle 1. The distance, r, is then the distance between the two interacting particles. The equation derived from this modification is as follows:
B₂(T) = −2π ∫₀^∞ (exp(−u(r)/k_B T) − 1) r² dr
A different B₂(T) is obtained for the hard-sphere potential and for the Lennard-Jones potential.
Hard Sphere Potential
The hard-sphere model approximates particles as hard spheres that cannot overlap: if the spheres are not overlapping, the potential energy is zero, and if they are overlapping, the potential energy is infinitely high. This approximation represents the very strong short-range Pauli repulsion forces. The equation for the potential energy is as follows:
u(r) = ∞ for r < σ,  u(r) = 0 for r ≥ σ
Where u(r) is the potential energy and σ is the hard-sphere diameter (the distance of closest approach). Integrating the configurational integral for the hard-sphere potential gives B₂ = 2πσ³/3 as the second Virial coefficient. This model is crude and only accounts for repulsive forces; a slightly more accurate model is the Lennard-Jones potential.
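The hard-sphere result can be verified numerically from the Mayer-function form of B₂ given above, since the integrand is −r² inside the sphere and zero beyond it (midpoint rule, σ = 1 in reduced units):

```python
import math

def b2_hard_sphere(sigma, n_steps=100000):
    """Numerical B2 = -2*pi * integral of (exp(-u/kT) - 1) * r**2 dr.
    For hard spheres the Mayer function exp(-u/kT) - 1 is -1 for r < sigma
    and 0 beyond, so the integral can be truncated at sigma.
    Analytic result: 2*pi*sigma**3/3."""
    dr = sigma / n_steps
    total = 0.0
    for i in range(n_steps):
        r = (i + 0.5) * dr          # midpoint of each slab
        total += (-1.0) * r * r * dr
    return -2.0 * math.pi * total

b2 = b2_hard_sphere(1.0)   # analytic value: 2*pi/3
```

The numerical integral reproduces 2π/3 ≈ 2.094 to high accuracy, and temperature drops out entirely, as expected for hard spheres.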
Lennard-Jones Potential
The Lennard-Jones potential is a combination of a power-law repulsion term, (σ/r)¹², and a London dispersion attraction term, (σ/r)⁶.
Combining these terms, the pair potential energy, u(r), can be expressed as:
u(r) = 4ε[(σ/r)¹² − (σ/r)⁶]
Where ε is the potential well depth, σ is the separation at which the potential crosses zero, and r is the distance between the particles. The second Virial coefficient derived from the Lennard-Jones potential has no analytical solution and must be evaluated numerically.
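A numerical sketch of the Lennard-Jones second Virial coefficient in reduced units (σ = ε = 1, temperature in units of ε/k_B); the grid spacing and upper integration limit are arbitrary choices:

```python
import math

def b2_lennard_jones(T_red, r_max=10.0, n_steps=20000):
    """Reduced B2* = -2*pi * integral of (exp(-u(r)/T*) - 1) * r**2 dr,
    evaluated by the midpoint rule with sigma = eps = 1. At very small r the
    Boltzmann factor underflows harmlessly to zero."""
    dr = r_max / n_steps
    total = 0.0
    for i in range(n_steps):
        r = (i + 0.5) * dr
        u = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
        total += (math.exp(-u / T_red) - 1.0) * r * r * dr
    return -2.0 * math.pi * total
```

At low reduced temperature attraction dominates and B₂ is negative; at high temperature repulsion dominates and B₂ turns positive, crossing zero at the Boyle temperature.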
The Lennard-Jones model is more accurate than the hard-sphere model, as it accounts for attractive interactions and its repulsive term is more realistic than hard-sphere repulsion. That said, it is still limited in that only London dispersion attraction is considered, making the model strictly applicable only to noble gases.
References
[edit | edit source]
Potential of mean force
The potential of mean force is one of the three major forces considered in stochastic dynamics models. It provides a free energy profile along a chosen coordinate, be it geometric or energetic, such as the distance between two atoms or the torsional angle of a molecule. This free energy profile describes the average force exerted on the particles of interest over all possible configurations of a given system (the ensemble average of the force). The potential of mean force can be determined from both Monte Carlo and molecular dynamics simulations.
Relation to Radial Distribution Function
For a liquid, the potential of mean force is related to the radial distribution function,
w(r) = −k_B T ln g(r)
The potential of mean force can be calculated directly by using histogram analysis of a trajectory from an MD or MC simulation. This analysis calculates the probability of each possible configuration, and determines the free energy change associated with that state. In systems where parts of the coordinate of interest will not be sufficiently sampled from a naive simulation, umbrella sampling, free energy perturbation, or other enhanced sampling techniques can be used.
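The relation w(r) = −k_B T ln g(r) can be applied directly to a tabulated radial distribution function; the sample g(r) values below are invented for illustration:

```python
import math

def pmf_from_rdf(g_values, T, kB=0.0083144626):
    """Potential of mean force w(r) = -kB*T*ln g(r), in kJ/mol with T in K.
    Each g(r) value must be positive (zero g would give an infinite PMF)."""
    return [-kB * T * math.log(g) for g in g_values]

# Where g(r) = 1 (bulk density) the PMF is zero; g(r) > 1 marks a
# free-energy well, g(r) < 1 a free-energy barrier.
w = pmf_from_rdf([0.5, 1.0, 2.0], 300.0)
```

In a histogram analysis, g(r) would come from binning pair distances over the trajectory before applying this transformation.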
Derivation
By integrating this function between two distance values, the reversible work associated with moving the two particles can be found. This work is equal to the change in Helmholtz energy between the two states.
Sources
[edit | edit source]1. Chandler, D. Introduction To Modern Statistical Mechanics, Oxford University Press, 1987, QC174.8.C47
2. Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation, Oxford University Press, 2010
See Also
Langevin dynamics
Langevin dynamics is used to describe the acceleration of a particle in a liquid.
- dv/dt = −(ζ/m) v + f(t)/m   (1)^{[4]}
The term −(ζ/m)v corresponds to the drag force divided by the mass, m, of the particle. The drag force of the system arises from weak, long-ranged intermolecular forces between the particle and the surrounding liquid. It depends on the frictional constant of the system, ζ, and the velocity of the particle, v. By Stokes' law, the frictional constant is proportional to the viscosity of the surrounding liquid and the radius of the particle.^{[4]}
The term f(t)/m represents the random force. The random force of the system arises from strong, short-ranged Pauli repulsion between atoms or molecules, experienced as collisions of the particle with the surrounding liquid. This force changes rapidly with time, much faster than the particle's velocity changes in denser liquids.^{[4]}
Velocity of the Particle
(1) is a first order linear differential equation and can be solved formally to give (2).
- ^{[4]}
The ensemble average of velocity is,
This is because the acceleration of the particle is due to a random force whose average is zero (i.e., ⟨f(t)⟩ = 0), so the average acceleration is also zero.
At short time intervals, the average velocity becomes,
At long time intervals, the average velocity becomes,
Displacement of the Particle
The Langevin equation expresses the rate of change of a particle's velocity. This in turn can be used to calculate the diffusion coefficient, since diffusion depends on the velocity of a particle in the liquid. The Langevin equation can also be used to sample the canonical ensemble of states. By integrating the velocity from time 0 to time t, the displacement of the particle can be found.^{[4]}
Substituting the definition for velocity from (2) into (3), and integrating by parts, gives
Squaring both sides of (4) and taking the ensemble average gives
- ^{[4]}
The derivation is as follows,
The random-force autocorrelation is a rapidly varying function of the time difference only, and is non-zero only when that difference is small, so it can be re-written as a function of the time difference alone.^{[4]} With this change of variables, the above equation becomes,
Simplifying, the above equation becomes
Equipartition theory applies at thermal equilibrium, giving ⟨v²⟩ = k_B T/m, and the above equation becomes (5).^{[4]}
At short time intervals (t → 0), the part of the equation representing Brownian motion goes to zero.
Let . Then,
This corresponds to ballistic motion of the particle.^{[4]} At short time scales, the effect of friction and collisions have not affected the movement of the particle.
At long time intervals, the velocity of the particle goes to zero.
At large values of t the exponential terms vanish, and so,
The diffusion constant, D, is equal to k_B T/ζ.^{[4]}
This corresponds to the particle undergoing a random walk.^{[4]}
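The Langevin equation can also be integrated numerically. The Euler–Maruyama sketch below is not a scheme from the text, and all parameter values are illustrative reduced units; it checks that the long-time velocities satisfy equipartition, ⟨v²⟩ ≈ k_BT/m:

```python
import math
import random

def langevin_step(v, dt, gamma, kBT, m, rng):
    """One Euler-Maruyama step of dv = -gamma*v*dt + sqrt(2*gamma*kBT/m) dW,
    i.e. friction plus a Gaussian random force, with no potential (U = 0)."""
    sigma = math.sqrt(2.0 * gamma * kBT / m * dt)
    return v - gamma * v * dt + sigma * rng.gauss(0.0, 1.0)

rng = random.Random(42)            # fixed seed for reproducibility
v, dt, gamma, kBT, m = 0.0, 0.01, 1.0, 1.0, 1.0
samples = []
for step in range(200000):
    v = langevin_step(v, dt, gamma, kBT, m, rng)
    if step > 1000:                # discard the equilibration period
        samples.append(v * v)
mean_v2 = sum(samples) / len(samples)   # equipartition predicts kBT/m = 1
```

The sampled ⟨v²⟩ converges to k_BT/m up to discretization and statistical error, which is the stationary behaviour derived above.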
References
[edit | edit source]- ↑ Rozanov Y.A. (2015) Probability Theory: A concise Course, Dover Publications, USA
- ↑ McQuarrie, A. (2000) Statistical Mechanics, University Science Books, California
- ↑ McQuarrie, D. A. Statistical thermodynamics; University Science Books: Mill Valley, CA, 1997.
- ↑ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} ^{h} ^{i} ^{j} ^{k} McQuarrie, Donald (2000). Statistical Mechanics. Harper and Row. p. 452.
Periodic Boundary Conditions
Macroscopic systems are extremely large and are, therefore, expensive to compute by molecular simulations. For instance, one gram of water has about 3 × 10²² molecules, a number far too large to treat explicitly, even on computers. Fortunately, periodic boundary conditions enable us to mimic an infinite system by treating a relatively small part of it, achieving a reasonable representation of the infinite system. The particles of this small subsystem are placed in a unit cell (e.g., a three-dimensional box) that is replicated periodically. During the simulation, particles move freely in the central (original) cell, and their periodic images in the adjacent cells move in an identical way. This means any particle that crosses one boundary of the cell will reappear on the opposite side.
Non-Bonded Interactions Under Periodic Boundary Conditions
A particle in a periodic cell has non-bonded interactions with the other particles both in the original cell and in the surrounding cells (the periodic images of the original cell). The non-bonded potential energy can be written as:
where n is a translational vector to the different cell images. For a periodic cubic cell of length L, n takes the form n = (n_x, n_y, n_z)L, where n_x, n_y, and n_z are integers. The number of possible translational vectors is infinite, which leads to an infinite number of non-bonded interactions. Thus, we need to perform some approximations to deal with this problem.
The first term of the potential is the Lennard-Jones potential. Its attractive r^{−6} component describes London dispersion, which is a short-range interaction. Accordingly, if the distance between interacting particles is large (e.g., r > 10 Å), the interaction becomes very weak, and interactions can be truncated beyond such a cutoff distance. However, the potential loses its continuity because of an abrupt truncation. This problem can be fixed by employing a switching function, which smoothly scales the van der Waals interactions down to zero at the cutoff distance. For faster computation, pair lists keep track of the particles that could lie inside the cutoff region (the pair-list distance is typically ~2 Å larger than the cutoff distance), and these lists are updated periodically during the simulation.
The second term of the potential represents the long-range (r^{−1}) electrostatic interaction, which cannot be truncated the same way as the Lennard-Jones potential. However, the particle mesh Ewald method can be used for treating the long-range interaction part.
Minimum Image Convention
The periodic boundary conditions use the minimum image convention to calculate distances between particles in the system. Consider four particles (A, B, C, and D) in a cubic box of length L. The potential interactions are then calculated between particle A and the nearest periodic images of the other particles (B, C, and D). The minimum image convention was first used by Metropolis and coworkers.^{[1]}
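A minimal sketch of the minimum image convention for a cubic box of length L (function name and values are illustrative only):

```python
def minimum_image_distance(ri, rj, L):
    """Distance between particles i and j under the minimum image convention.

    Each Cartesian separation component is shifted by a multiple of the
    cubic box length L so that it lies within [-L/2, L/2]; the distance
    to the nearest periodic image of j is then returned.
    """
    d2 = 0.0
    for a, b in zip(ri, rj):
        dx = a - b
        dx -= L * round(dx / L)  # shift into the nearest-image range
        d2 += dx * dx
    return d2 ** 0.5

# Two particles near opposite faces of a 10-unit box are only 1 unit
# apart through the periodic boundary, not 9 units.
print(minimum_image_distance([0.5, 0.0, 0.0], [9.5, 0.0, 0.0], 10.0))
```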
Periodic Boundary Conditions with NAMD
NAMD software requires three cell basis vectors to define the size and shape of the periodic cell: cellBasisVector1, cellBasisVector2, and cellBasisVector3. For an orthorhombic cell, each vector is perpendicular to the other two. The coordinates of the center of the periodic cell can be specified by the “cellOrigin” command. In addition, the “wrapAll on” command is usually used to ensure that any particle that crosses the boundary of the original cell will be imaged back into the cell.
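For example, a cubic cell of side 40 Å centered at the origin could be specified in a NAMD configuration file as follows (the cell dimensions here are purely illustrative):

```text
cellBasisVector1   40.0   0.0   0.0
cellBasisVector2    0.0  40.0   0.0
cellBasisVector3    0.0   0.0  40.0
cellOrigin          0.0   0.0   0.0
wrapAll            on
```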
References
- ↑ Metropolis, N.; Rosenbluth, A. W.; Rosenbluth, M. N.; Teller, A. H.; Teller, E. (1953). "Equation of State Calculations by Fast Computing Machines". The Journal of Chemical Physics. 21: 1087–1092.
Further Reading
Allen, M. P.; Tildesley, D. J. Computer Simulation of Liquids; Oxford University Press: New York, 1989.
Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation; Oxford University Press: New York, 2010.
Treatment of Long Range Forces
Treatment of long-range interactions in molecular simulations poses a serious challenge in terms of computational requirements. By employing periodic boundary conditions, bulk properties of a sample may be calculated efficiently using a limited number of molecules within a finite cell. Each molecule in the cell interacts with every other molecule, both within the same cell and within the infinite set of periodic cell images. These interactions include dispersion, repulsion, and electrostatic interactions. Essentially, this means that there are an infinite number of intermolecular interactions that must be calculated during the simulation, which is a problem in practical computation.
As is common in molecular simulation, the Lennard-Jones potential is used to describe both dispersion and repulsion interactions. These interactions decay fairly quickly relative to other electrostatic interactions, and can thus be dealt with fairly easily in computation. Electrostatic interactions (such as charge-dipole, dipole-dipole, quadrupole-quadrupole, etc.) are often significantly stronger at a given distance r, and decay at a much slower rate as r increases; the larger the exponent of r in the interaction potential, the faster the decay. This makes their treatment somewhat more complicated.
Cutoffs
Non-bonded cutoffs are employed so as to save computational resources without sacrificing the validity of results. These cutoffs truncate the long-range interactions between atoms or molecules in a simulation at a specific distance. Because intermolecular interaction potentials converge to a particular value (namely zero) as r → ∞, the interaction potential can effectively be set to this value beyond a certain distance. In the case of dispersion and repulsion interactions, which are modelled by the Lennard-Jones potential, a switching function, S(r), is employed in the simulation. This switching function causes the decaying interaction to reach zero by a desired cutoff distance, at which point the two molecules or atoms involved do not interact through the Lennard-Jones potential at all.
The intermolecular distance at which the switching function is turned on is the switch distance, which is generally about 2 Å less than the cutoff distance. In between the switch distance and the cutoff distance, the interaction potential is gradually scaled to zero by the switching function. This gradual change in interaction potential maintains the physical basis of the simulation without using an abrupt, non-physical cutoff of interactions (as can be seen in the hard-sphere model for repulsive interactions). This method avoids a discontinuity in the interaction potential that would be caused by an abrupt truncation of the interaction potential function.
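One common choice of switching function is the CHARMM-style polynomial form, shown below as an illustration (the text does not specify which functional form is used; names and parameter values are invented for the example):

```python
def switching_function(r, r_switch, r_cutoff):
    """CHARMM-style switching function (one common choice; others exist).

    Returns 1 below the switch distance, 0 beyond the cutoff, and a
    smooth interpolation in between, so the switched potential
    S(r) * V_LJ(r) goes continuously to zero at the cutoff.
    """
    if r <= r_switch:
        return 1.0
    if r >= r_cutoff:
        return 0.0
    c2, s2, r2 = r_cutoff**2, r_switch**2, r**2
    return ((c2 - r2)**2 * (c2 + 2.0 * r2 - 3.0 * s2)) / (c2 - s2)**3

def switched_lj(r, epsilon, sigma, r_switch, r_cutoff):
    """Lennard-Jones potential scaled smoothly to zero at the cutoff."""
    sr6 = (sigma / r)**6
    v_lj = 4.0 * epsilon * (sr6**2 - sr6)
    return switching_function(r, r_switch, r_cutoff) * v_lj

# With a switch distance of 8 Å and a cutoff of 10 Å, the interaction is
# untouched below 8 Å, scaled down between 8 and 10 Å, and zero beyond.
print(switched_lj(9.0, 1.0, 3.0, 8.0, 10.0))
```

Note that S(r) equals exactly 1 at the switch distance and exactly 0 at the cutoff, which is what removes the discontinuity an abrupt truncation would introduce.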
Such a convenient treatment of long-range interactions is easily applied to dispersion and repulsion interactions modelled by the Lennard-Jones potential, but is not applicable in all cases. For electrostatic interactions, other methods (such as Ewald summation, see below) must be employed. This is because electrostatic interactions are typically much longer-range than dispersion and repulsion interactions, and truncation through the use of simple switching functions could result in a serious decrease in accuracy of results. These interactions are stronger than dispersion or repulsion, and must be treated more carefully.
Ewald Decomposition
While truncation of the Lennard-Jones potential for dispersion and repulsion interactions is made relatively simple through the use of switching functions and non-bonded cutoffs, dealing with long-range electrostatic interactions is less straightforward, and requires a different approach. Evaluation of these interactions is commonly performed using the Ewald method, which divides electrostatic interactions into short-range and long-range components. The short-range component quickly decays to zero in real space, thus it can be truncated using a switching function, as was the case for the Lennard-Jones potential. The long-range component is more difficult to deal with, though it can be calculated efficiently in reciprocal space. In the Ewald summation method, the Gauss error function (erf) and complementary error function (erfc) are employed.^{[1]} These functions can be seen as
erf(x) = (2/√π) ∫₀^x e^(−t²) dt

and

erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt

with erf(0) = 0 and erfc(0) = 1. These functions are thus complementary. Also, note that

erf(x) → 1 as x → ∞

and

erfc(x) → 0 as x → ∞;

thus, by definition,

erf(x) + erfc(x) = 1.
In using these Gauss functions, the 1/r term of the Coulombic interaction potential is broken down into the aforementioned short- and long-range terms,

1/r = erfc(αr)/r + erf(αr)/r
where α is a tunable constant chosen to ensure the short-range part of the electrostatic interaction decays to zero by the desired distance. In this case, the erfc(αr)/r term corresponds to the short-range interaction potential, which decays quickly to zero and is evaluated in real space, while the erf(αr)/r term corresponds to the long-range interaction potential, which is more easily evaluated in reciprocal space.^{[2]} This potential is evaluated as a sum of all possible charge-charge interactions across the infinite periodic images of the simulation cell. In reciprocal space, these interactions converge quickly, allowing efficient computation through application of fast Fourier transforms.^{[1]} The sum of all charge-charge interactions at long range throughout all periodic images of the simulation system can be represented as

V_long-range = (1/2) Σ_**n** Σ_i Σ_j q_i q_j erf(α|**r**_ij + **n**|) / (4πε₀ |**r**_ij + **n**|)
where **r**_ij is the distance vector between interacting charges q_i and q_j, and **n** is the translational vector between different periodic cell images. Equivalently, this sum can be written as a Fourier series,^{[1]}
where f(**k**) is a reciprocal space expansion coefficient and **k** is a reciprocal space vector. Note that while this sum is still infinite, the expansion coefficients decay to zero quickly as |**k**| increases, allowing efficient computation of long-range electrostatic interactions through application of fast Fourier transforms.
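The splitting of the 1/r term can be verified numerically with Python's standard error functions (the value of α below is arbitrary, chosen only for illustration):

```python
import math

alpha = 0.3  # illustrative Ewald splitting parameter

def short_range(r, alpha):
    """erfc(alpha*r)/r: decays quickly, evaluated in real space."""
    return math.erfc(alpha * r) / r

def long_range(r, alpha):
    """erf(alpha*r)/r: smooth, evaluated in reciprocal space."""
    return math.erf(alpha * r) / r

# The two pieces recombine exactly into the bare Coulomb 1/r term.
r = 7.5
assert abs(short_range(r, alpha) + long_range(r, alpha) - 1.0 / r) < 1e-12

# Beyond a modest cutoff the real-space piece is already negligible
# compared with 1/r, so it can safely be truncated.
print(short_range(12.0, alpha))
```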
Particle Mesh Ewald
The particle mesh Ewald method is a numerical approximation to Ewald summation.^{[3]} It involves a "smearing" of charges across a finite set of lattice points, located on a grid (or "mesh") that is placed over the simulation cell. By localizing charge density on this grid, the overall charge distribution across the simulation cell and its infinite periodic images is reduced to a more manageable form. This makes calculation of long-range electrostatic interactions significantly easier and less expensive, particularly in systems with a large simulation cell length, and thus a large number of reciprocal space vectors. From the lattice of "smeared" charge densities, long-range interaction potentials can be calculated using a fast Fourier transform. While this approximation decreases the computational cost significantly, accuracy of collected data is still high. The particle mesh Ewald method is also relatively simple to integrate into a simulation.^{[4]} These factors combine to make it an extremely attractive method to calculate long-range electrostatic interaction potentials.
See Also
Wikipedia article on Ewald summation
Monte Carlo Methods
Monte Carlo methods are stochastic techniques that use random numbers to sample conformation space. In these methods, a random conformation is generated and then either accepted or rejected according to a defined criterion.
A Monte Carlo simulation starts from a given conformation; random numbers then generate a new trial conformation, which is either accepted or rejected. If it is accepted, the trial conformation becomes the current conformation, and the next iteration starts from it.
Monte Carlo Integration
Computing a definite integral is a simple illustrative example of using the Monte Carlo method. Consider a simple function f(x), with values between 0 and 1, and its definite integral from 0 to 1:
In this case, the sampling space is two-dimensional, and random x and y values are generated to make each new trial point. Only the trial points (pairs of x and y values) that fall below the graph of the function are accepted. The number of accepted points divided by the total number of samples is the probability for samples to be accepted. Multiplying this probability by the area of the sampling space (here, the unit square) approximates the value of the integral.
A larger number of samples yields a value closer to the exact integral.
This method is usually used for higher-dimensional integrals.
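The procedure above can be sketched in a few lines, using f(x) = x² as an arbitrary illustrative integrand (its exact integral on [0, 1] is 1/3; the function name and sample count are invented for the example):

```python
import random

def hit_or_miss_integral(f, n_samples, seed=0):
    """Estimate the integral of f over [0, 1] (for 0 <= f <= 1)
    by hit-or-miss Monte Carlo sampling.

    Random (x, y) points are drawn in the unit square; the fraction
    falling below the curve, times the square's area (1), estimates
    the integral.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if y <= f(x):
            hits += 1
    return hits / n_samples

# Estimate of the integral of x^2 from 0 to 1; exact value is 1/3.
estimate = hit_or_miss_integral(lambda x: x * x, 100_000)
print(estimate)
```

With 100,000 samples the estimate typically falls within a few thousandths of 1/3, and the statistical error shrinks as the sample count grows.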
Metropolis Monte Carlo Simulation
In molecular simulations, the Metropolis Monte Carlo algorithm is usually used to accept or reject the trial conformation. If the energy of the trial conformation is lower than or equal to the current energy, it is always accepted. If the energy of the trial conformation is higher than the current energy, it is accepted with a probability determined by the Boltzmann distribution,

P_accept = exp(−(E_trial − E_current)/k_B T)

This means that, if E_trial > E_current, a uniform random number R in [0, 1) determines whether to accept or reject the conformation: the move is accepted when R < exp(−(E_trial − E_current)/k_B T).
The Metropolis algorithm satisfies microscopic reversibility, meaning that the probability of being in a state and moving to the next state is the same as the probability of being in the new state and moving back to the previous state.
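The acceptance rule can be sketched as follows (function and variable names are invented for illustration; energies are in units of k_B T when kT = 1):

```python
import math
import random

def metropolis_accept(e_current, e_trial, kT, rng=random):
    """Metropolis criterion: downhill (or equal-energy) moves are always
    accepted; uphill moves are accepted with probability
    exp(-(E_trial - E_current) / kT)."""
    delta_e = e_trial - e_current
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / kT)

# A move that lowers the energy is always accepted.
assert metropolis_accept(5.0, 3.0, kT=1.0)

# An uphill move of 1 kT is accepted roughly exp(-1) ~ 37% of the time.
rng = random.Random(0)
rate = sum(metropolis_accept(0.0, 1.0, 1.0, rng) for _ in range(10_000)) / 10_000
print(rate)
```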
Sampling
The configurational integral requires integration of the Boltzmann distribution over all spatial variables. To employ Monte Carlo integration to estimate the configurational integral (Z), a large number of configurations are randomly generated, and the Boltzmann weight is evaluated only at these points. With enough samples, the configurational integral can be approximated.
Direct Monte Carlo sampling is extremely inefficient, however, because a huge number of states must be sampled. Moreover, high-energy states are sampled with the same probability as low-energy states, even though, by the Boltzmann distribution, the system is far more likely to be in a low-energy state.
The Metropolis algorithm changes the acceptance probability so that the system samples low-energy states preferentially.
Limitations of Monte Carlo Simulations
With Monte Carlo simulations, only an equilibrium distribution of states can be sampled. Moves are random and non-physical, so a trajectory corresponds only to a sequence of Monte Carlo moves and contains no temporal information; dynamical properties, such as diffusion coefficients, therefore cannot be calculated from a Monte Carlo trajectory.
Equilibration
Metropolis Monte Carlo cycles favor transitions to lower energies and thus reach more probable structures. Many iterations of the Metropolis Monte Carlo algorithm will move the system to an equilibrium state. Multiple properties can be plotted to determine when equilibration is complete.
Movement toward equilibrium appears as a slow drift in these properties; equilibration is not complete until the drift has stopped and the properties fluctuate around an average value.
Equilibrium properties
At equilibrium, the system is not undergoing a systematic change, and there are no large-scale gradients in the physical properties of the matter. If a small perturbation is made to the system, the system will return to its equilibrium state. At equilibrium there are no momentum, concentration, or temperature gradients in the system.
Molecular Dynamics
Molecular dynamics simulation is a tool for simulating the motions of the atoms of a many-body system. Molecular dynamics is used to compute equilibrium and transport properties (e.g., viscosity, thermal conductivity, diffusion coefficients, reaction rates, protein folding times, structure, and surface coating) of classical systems. To simulate the dynamics of molecules, classical Newtonian mechanics can be used. In practice, molecular dynamics simulations are performed by moving forward in small increments of time, solving the classical equations of motion for atoms and molecules to obtain the time evolution of the system. Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically, so molecular dynamics solves this problem using numerical methods.
Verlet Algorithm
One popular algorithm that predicts the positions of atoms at a later point in time is the Verlet algorithm.^{[5]} This algorithm approximates the position of particles at time t + Δt (one time step into the future from time t) by using a Taylor series approximation.
A Taylor series approximates a function at point x based on the values and derivatives of that function at another point a.
Derivation of the Verlet Algorithm
A Taylor series gives an equation for the position of the molecule at time t + Δt:

r(t + Δt) = r(t) + v(t)Δt + (1/2)a(t)Δt² + ...

The first and second derivatives of position correspond to velocity and acceleration, and acceleration is related to force through Newton's second law.
Considering one time step backward in time:

r(t − Δt) = r(t) − v(t)Δt + (1/2)a(t)Δt² − ...

Taking the sum of the positions at the previous and next steps, the odd-order terms cancel:

r(t + Δt) + r(t − Δt) = 2r(t) + a(t)Δt²

so that

r(t + Δt) = 2r(t) − r(t − Δt) + a(t)Δt²
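The Verlet recursion can be tested on a system with a known solution. The sketch below (names and parameters are invented for the example) integrates a unit harmonic oscillator, x'' = −x with x(0) = 1 and v(0) = 0, whose exact solution is cos t:

```python
import math

def verlet_trajectory(x0, v0, accel, dt, n_steps):
    """Integrate x'' = accel(x) with the position Verlet recursion:
    x(t + dt) = 2 x(t) - x(t - dt) + a(t) dt^2.

    The first step uses the Taylor expansion directly, since the
    recursion needs two previous positions to start.
    """
    x_prev = x0
    x = x0 + v0 * dt + 0.5 * accel(x0) * dt * dt
    for _ in range(n_steps - 1):
        x_next = 2.0 * x - x_prev + accel(x) * dt * dt
        x_prev, x = x, x_next
    return x

# After 1000 steps of dt = 0.001 (total time 1.0), the trajectory
# stays close to the exact solution cos(1).
dt, n = 0.001, 1000
x_final = verlet_trajectory(1.0, 0.0, lambda x: -x, dt, n)
print(x_final, math.cos(1.0))
```

Because velocities cancel out of the recursion, the basic Verlet algorithm propagates positions using only the previous two positions and the current acceleration.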