Structural Biochemistry/Molecular Modeling/Molecular Dynamics

From Wikibooks, open books for an open world

Theoretical Overview

Simple example of a molecular dynamics cage that shows the folding of a small amino acid chain, the tryptophan cage, in implicit solvent.

Classical molecular dynamics (MD) is a branch of computational chemistry that focuses on simulating the motion of particles according to Newton's Laws of Motion.[1] Atoms are approximated as "balls on springs," which allows for the application of the following laws:

$$\mathbf{F} = m\mathbf{a}, \qquad \mathbf{a} = \frac{d\mathbf{v}}{dt}, \qquad \mathbf{v} = \frac{d\mathbf{x}}{dt}$$

where $\mathbf{x}$ is position, $\mathbf{v}$ is velocity, $\mathbf{a}$ is acceleration, and $t$ is time.

Classical MD approximations treat bond and angle potentials as harmonic, and dihedrals as periodic. This allows the energy of a system to be expressed with an equation similar to the following, used by the AMBER molecular dynamics program:[2]

$$E = \sum_{\text{bonds}} k_b (r - r_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} \frac{V_n}{2}\left[1 + \cos(n\phi - \gamma)\right] + \sum_{i<j} \left[\frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}} + \frac{q_i q_j}{\varepsilon r_{ij}}\right]$$

Simulations are conducted in a series of "time steps." In each step, the force on each atom is calculated as the negative derivative of the energy equation with respect to position. Each atom is then moved a distance corresponding to the magnitude and direction of that force over the length of the time step. Repeated over many time steps, iteratively calculating forces and solving the equations of motion from the resulting accelerations, this process produces results that mimic complex behavior, from the docking of pharmaceutical compounds to the folding of simple proteins.

A simplified view of the steps a molecular dynamics simulation will take. Production MD codes use more complicated versions of this algorithm that incorporate temperature and pressure control.
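The loop described above can be sketched in code. The following is a minimal, illustrative Python implementation using the velocity Verlet integrator, a common integration scheme in MD codes; the one-dimensional "ball on a spring" system and all numerical values are purely illustrative, not taken from any production code:

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate Newton's equations of motion with velocity Verlet.

    force(x) returns the force at position x; units are arbitrary.
    """
    a = force(x) / mass
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2   # move atoms
        a_new = force(x) / mass            # recompute forces at new positions
        v = v + 0.5 * (a + a_new) * dt     # update velocities
        a = a_new
        traj.append(x)
    return np.array(traj), v

# Illustrative system: one "ball on a spring" (harmonic force, k = m = 1)
k = 1.0
traj, v_final = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x,
                                mass=1.0, dt=0.01, n_steps=1000)
```

Started from rest at x = 1, the trajectory closely follows cos(t), and the total energy stays approximately constant over long runs, which is the main reason schemes of this family are preferred over naive Euler integration.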


Quantum Effects

Although atoms are known to behave according to the Schrödinger equation rather than Newton's Laws, classical methods can be used with reasonable accuracy. A good indication of this is given by the thermal de Broglie wavelength of the particles involved:[3]

$$\Lambda = \frac{h}{\sqrt{2\pi m k_B T}}$$

where $h$ is the Planck constant, $m$ is the atomic mass, $k_B$ is the Boltzmann constant, and $T$ is the temperature. The classical approximation is justified when $\Lambda \ll a$, where $a$ is the mean nearest-neighbor separation between particles. So for heavier systems, such as organic molecules, the classical approximation is reasonable, but for light systems such as H2, He, or Ne, the method loses considerable accuracy.
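As an illustration of this criterion, the thermal de Broglie wavelength can be evaluated for a few species at room temperature and compared against a typical interatomic spacing of a few angstroms (a short Python sketch; the chosen masses and temperature are just examples):

```python
import math

H = 6.62607015e-34       # Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def thermal_de_broglie(mass_amu, temperature_k):
    """Thermal de Broglie wavelength, in metres."""
    m = mass_amu * AMU
    return H / math.sqrt(2.0 * math.pi * m * KB * temperature_k)

# Light vs. heavy species at 300 K, versus a typical ~3 Angstrom spacing
for name, mass in [("H2", 2.016), ("He", 4.003), ("Ar", 39.95)]:
    lam = thermal_de_broglie(mass, 300.0)
    print(f"{name}: {lam * 1e10:.3f} Angstrom")
```

The heavier argon atom gives a wavelength roughly an order of magnitude below typical separations, while the light species give wavelengths several times longer, consistent with the criterion above.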

Since atoms are approximated as discrete classical particles, interactions at smaller scales cannot be correctly modeled with classical MD.[4] The making and breaking of chemical bonds, noncovalent reaction intermediates, and proton or electron tunneling all cannot be simulated with this method. Simulations at very low temperature are also problematic, as near absolute zero classical mechanics increasingly fails where quantum descriptions are required. Quantum molecular dynamics methods exist that are more electronically realistic, but at much greater computational cost.

System size and timescale

The number of atoms and the amount of time that may be simulated are directly related to available computing power. Although the entirety of a small virus has been simulated,[5] the majority of fine-grained, all-atom simulations are on the scale of proteins rather than organelles or complex assemblies. This is because the simulation time needed to observe relevant behavior grows with system size:

Table - Timescales of molecular processes

Timescale (s)     Name             Process
10^-1                              Surface reorganizations
10^-3             milliseconds     Protein folding
10^-8             10 nanoseconds   NMR
10^-9 to 10^3                      Brownian dynamics
10^-9 to 10^0                      Lattice Monte Carlo
10^-12            picoseconds      Fast vibrations
10^-15            femtoseconds     Electronic excitation

So, the larger the system, the longer the simulation must run to observe relevant behavior. Today, computer hardware specifically designed for molecular dynamics simulations[6] can simulate approximately 17 microseconds of a 23,000-atom system in one day with a 0.5 femtosecond timestep. For the same system, a general-purpose parallel supercomputer can obtain on the order of a few hundred nanoseconds of simulation time per day.
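A quick back-of-envelope calculation shows what these figures mean in terms of integration steps (Python, using the numbers quoted above):

```python
# Numbers quoted above: 17 microseconds/day of simulated time, 0.5 fs timestep
dt_s = 0.5e-15            # timestep, in seconds
simulated_s = 17.0e-6     # simulated time per wall-clock day, in seconds

steps_per_day = simulated_s / dt_s
steps_per_second = steps_per_day / 86400.0
print(f"{steps_per_day:.1e} timesteps/day, about {steps_per_second:.0f} steps/s")
```

That is tens of billions of force evaluations per day, which is why both specialized hardware and highly optimized force kernels matter so much for MD throughput.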


Using an approach based on Newton's equations of motion, a simulation samples the "microcanonical ensemble," a constant-energy system (NVE). This is unrealistic, however, and in order to produce accurate comparisons to experiment, simulations are most commonly done in the canonical ensemble, which features constant temperature (NVT).[7] This temperature must be maintained with an algorithm called a thermostat. Several thermostats are commonly used, each with its own advantages and disadvantages in terms of reliability, accuracy, and computational expense.

A typical simulation involves an initial equilibration phase, in which the structure is relaxed near 0 K for a certain amount of time; a heating phase, in which the system is gradually brought to the desired temperature (often 300 K); and a final production simulation at that constant temperature. The initial equilibration is necessary not to move the structure to chemical equilibrium but rather to an energetic minimum (which will usually be near the chemical-equilibrium state). If the equilibration were omitted and the simulation started in a region of high energy, the simulation would continue to sample highly energetic phase space, producing results inconsistent with experimental data.[8]

The Langevin Thermostat

This thermostat follows the Langevin equation of motion rather than Newton's, where a frictional force proportional to the velocity is added to the conservative force, which adjusts the kinetic energy of the particle so that the temperature is correct.[9]

The system is then described by the equation:

$$m\mathbf{a} = \mathbf{F}(\mathbf{x}) - \gamma m\mathbf{v} + \mathbf{R}(t)$$

where $m$, $\mathbf{v}$, and $\mathbf{a}$ are the mass, velocity, and acceleration of the particle, $\gamma$ is a frictional constant, and $\mathbf{R}(t)$ is a random force that adds kinetic energy to the particle. It is sampled from a Gaussian distribution whose variance is a function of the set temperature, the friction constant, and the time step. This balances the frictional force to maintain the temperature at the set value.
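A minimal sketch of a Langevin integrator follows, using a simple first-order discretization in reduced units with an illustrative harmonic force; production codes use more sophisticated splitting schemes:

```python
import numpy as np

def langevin_step(x, v, force, mass, gamma, kT, dt, rng):
    """One step of a simple first-order Langevin discretization.

    The random-force width sqrt(2*gamma*mass*kT/dt) balances the friction
    term so the long-time average temperature settles at kT.
    """
    R = rng.normal(0.0, np.sqrt(2.0 * gamma * mass * kT / dt))
    a = (force(x) - gamma * mass * v + R) / mass
    v = v + a * dt            # kick: conservative + friction + random force
    x = x + v * dt            # drift
    return x, v

# Illustrative run: harmonic oscillator in reduced units (m = k = kT = 1)
rng = np.random.default_rng(0)
x, v = 0.0, 0.0
kinetic = []
for _ in range(200_000):
    x, v = langevin_step(x, v, force=lambda x: -x, mass=1.0,
                         gamma=1.0, kT=1.0, dt=0.01, rng=rng)
    kinetic.append(0.5 * v * v)
mean_kinetic = float(np.mean(kinetic[50_000:]))  # approaches kT/2 = 0.5
```

After discarding the initial transient, the time-averaged kinetic energy approaches kT/2, as equipartition requires for a single degree of freedom.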

The Andersen Thermostat

This thermostat couples the system to a heat bath at the desired temperature. The bath is represented by stochastic collisions applied to randomly selected particles of the system.

It has been shown[10] that the addition of stochastic collisions to the Newtonian MD system makes the simulation a Markov chain that is irreducible and aperiodic. This implies that the distribution generated with Andersen's algorithm is the canonical distribution.

Because this thermostat randomly reassigns particle velocities, however, it disrupts the system's true dynamics, so time-dependent properties computed from the simulation are nonphysical.
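The collision step itself is simple to sketch: each particle suffers a bath collision with probability ν·Δt per step, after which its velocity is redrawn from the Maxwell-Boltzmann distribution. The collision frequency and other values below are illustrative:

```python
import numpy as np

def andersen_collisions(v, masses, kT, nu, dt, rng):
    """Andersen-style collision step (reduced units, k_B = 1).

    Each particle collides with the bath with probability nu*dt; its
    velocity is then redrawn from the Maxwell-Boltzmann distribution at kT.
    """
    v = v.copy()
    hit = rng.random(v.size) < nu * dt
    v[hit] = rng.normal(0.0, np.sqrt(kT / masses[hit]))
    return v

# Illustrative: 1000 particles started cold, thermalized by collisions alone
rng = np.random.default_rng(1)
masses = np.ones(1000)
v = np.zeros(1000)
for _ in range(2000):
    v = andersen_collisions(v, masses, kT=1.0, nu=0.5, dt=0.1, rng=rng)
mean_kinetic = float(np.mean(0.5 * masses * v**2))   # approaches kT/2
```

In a full simulation the Newtonian integration of the untouched particles would run between collision steps; here the collisions alone drive the velocities to the target distribution.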

The Gaussian Thermostat

This thermostat strongly couples the system to a heat bath set to the desired temperature. Here, the change in momentum for each particle is represented as:[11]

$$\dot{\mathbf{p}}_i = \mathbf{F}_i - \alpha \mathbf{p}_i, \qquad \alpha = \frac{\sum_i \mathbf{F}_i \cdot \mathbf{p}_i / m_i}{\sum_i \mathbf{p}_i^2 / m_i}$$

where $\alpha$ is a friction coefficient chosen at every instant so that the total kinetic energy is held exactly constant.

This rescaling destroys dynamic correlations very quickly, and therefore does not maintain the canonical ensemble.

The Berendsen Thermostat

This thermostat, first presented in 1984,[12] involves rescaling the velocities of each particle in a manner that controls the temperature.

Similar to the Gaussian thermostat, the system is coupled to a heat bath at the desired temperature $T_0$, although this algorithm uses weak rather than strong coupling: the temperature of the system is corrected such that its deviation from $T_0$ decays exponentially with some time constant $\tau$,

$$\frac{dT}{dt} = \frac{T_0 - T}{\tau}$$

which is achieved by scaling the velocities at each step by the factor

$$\lambda = \left[1 + \frac{\Delta t}{\tau}\left(\frac{T_0}{T} - 1\right)\right]^{1/2}$$

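The rescaling step can be sketched as follows (reduced units; the relaxation time and initial conditions are illustrative):

```python
import numpy as np

def berendsen_scale(v, masses, T_target, tau, dt, kB=1.0):
    """Berendsen weak-coupling rescale for 1-D velocities (reduced units)."""
    T_inst = np.sum(masses * v**2) / (v.size * kB)   # instantaneous temperature
    lam = np.sqrt(1.0 + (dt / tau) * (T_target / T_inst - 1.0))
    return v * lam

# Illustrative: a hot ideal gas of 300 particles relaxing toward T0 = 1
rng = np.random.default_rng(2)
masses = np.ones(300)
v = rng.normal(0.0, 2.0, size=300)    # initial temperature near 4
for _ in range(500):
    v = berendsen_scale(v, masses, T_target=1.0, tau=1.0, dt=0.05)
T_final = float(np.sum(masses * v**2) / 300.0)   # converges to T0
```

Because each update is a deterministic rescale, the instantaneous temperature converges smoothly to the target; this same suppression of fluctuations is why the Berendsen scheme does not reproduce the canonical ensemble exactly.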
The Nose-Hoover Thermostat

This thermostat adds an extra friction term, $\zeta$, to the Newtonian equations of motion, producing the following second-order differential equations:

$$m\ddot{\mathbf{x}} = \mathbf{F} - \zeta m\dot{\mathbf{x}}, \qquad \dot{\zeta} = \frac{1}{Q}\left(\sum_i m_i \dot{\mathbf{x}}_i^2 - 3N k_B T_0\right)$$

where $m$ is mass, $T_0$ is the desired temperature, and $k_B$ is the Boltzmann constant. The heat bath is given some fictitious mass, $Q$, which controls the strength of the coupling; Nose's suggested choice of $Q$ produces the above equation for the momentum change of each individual particle.

With this thermostat, the energy of the simulated system fluctuates, but the total energy of the system plus heat bath is conserved. If the system is ergodic (time averages equal ensemble averages over all accessible microstates), then this thermostat generates the canonical ensemble.[13]
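A minimal Nose-Hoover integration for a single particle can be sketched as below (an explicit, illustrative scheme in reduced units, with a 1-D harmonic force). Notably, the single harmonic oscillator is a standard example of non-ergodic behavior under this thermostat, so its time-averaged kinetic energy need not converge exactly to kT/2:

```python
import numpy as np

def nose_hoover_run(x, v, force, mass, kT, Q, dt, n_steps):
    """Integrate one particle with a Nose-Hoover friction variable zeta.

    zeta grows when the kinetic energy exceeds its target and shrinks when
    it falls below, steering the time average toward kT/2 per degree of freedom.
    """
    zeta = 0.0
    kinetic = np.empty(n_steps)
    for i in range(n_steps):
        a = force(x) / mass - zeta * v   # conservative force plus bath friction
        v = v + a * dt
        x = x + v * dt
        zeta = zeta + dt * (mass * v * v - kT) / Q   # bath equation of motion
        kinetic[i] = 0.5 * mass * v * v
    return x, v, kinetic

# Illustrative: one 1-D harmonic oscillator in reduced units (k_B = 1)
x, v, kinetic = nose_hoover_run(x=1.0, v=0.5, force=lambda x: -x,
                                mass=1.0, kT=1.0, Q=1.0, dt=0.005,
                                n_steps=200_000)
mean_kinetic = float(kinetic[50_000:].mean())
```

In practice this is why Nose-Hoover chains or many-particle systems are used; a single oscillator's trajectory stays on regular tori rather than sampling the full canonical distribution.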

Force fields

Molecular dynamics simulations require a "force field," which lists the parameters involved in each type of interaction in the energy equation. For example, the AMBER force field requires, for every combination of two atom types, an equilibrium bond length and a bond force constant; for every combination of three atom types, an equilibrium angle and an angle force constant; and for every combination of four atom types, a dihedral phase, force constant, and periodicity. Additional parameters describe the nonbonded (van der Waals and electrostatic) interactions for each atom as well.
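The structure of such a parameter set can be illustrated with a small sketch. All atom-type names and numeric values below are hypothetical placeholders, not taken from any real AMBER parameter file, and the energy terms follow the harmonic and periodic forms described above:

```python
import math

# Hypothetical parameter tables keyed by atom-type tuples; every name and
# number here is an illustrative placeholder, not a real force-field entry.
BOND_PARAMS = {("CT", "CT"): (310.0, 1.526)}        # k_b (kcal/mol/A^2), r0 (A)
ANGLE_PARAMS = {("CT", "CT", "CT"): (40.0, 1.911)}  # k_theta, theta0 (rad)
DIHEDRAL_PARAMS = {("CT", "CT", "CT", "CT"): (0.156, 3, 0.0)}  # V_n/2, n, gamma

def bond_energy(types, r):
    k_b, r0 = BOND_PARAMS[types]
    return k_b * (r - r0) ** 2                       # harmonic bond term

def angle_energy(types, theta):
    k_t, theta0 = ANGLE_PARAMS[types]
    return k_t * (theta - theta0) ** 2               # harmonic angle term

def dihedral_energy(types, phi):
    v_half, n, gamma = DIHEDRAL_PARAMS[types]
    return v_half * (1.0 + math.cos(n * phi - gamma))  # periodic torsion term

# A stretched C-C bond plus one torsion contribution
e = bond_energy(("CT", "CT"), 1.60) + dihedral_energy(("CT", "CT", "CT", "CT"),
                                                      math.pi)
```

A real force field contains thousands of such entries, plus per-atom charges and van der Waals parameters for the nonbonded terms.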

The development and analysis of these parameters, which collectively are called a force field, is a focus of molecular dynamics research.

Force field development

Parameters for classical molecular dynamics ideally lead to behavior that closely approximates that of actual molecules on the atomistic level, where behavior is in reality governed by quantum phenomena. The classical parameters are developed to make this true: once an energy and force equation (such as the AMBER equation) has been chosen to establish the paradigm for the system, parameters are derived such that values calculated with them match those obtained from quantum mechanical methods or experiment. Because experimental data are not available for the wide variety of molecules and molecular conformations needed to develop a force field, data from quantum calculations on the systems are used instead.

Force fields are developed for specific classes of molecules. For example, AMBER provides the Amber ff99SB forcefield for proteins[14], the Glycam forcefield for carbohydrates,[15] and the newly developed lipid forcefield GAFFLipid.[16] In addition, a "General AMBER Force Field," or GAFF, has been developed for the parameterization of small ligands and other molecules that do not fit into the broad force field categories.

The accuracy of a force field is critical to simulation accuracy. Simple terms, such as dihedral force constants, can have an extremely large effect on overall simulations. In fact, inaccurate dihedral parameters for the protein backbone φ and ψ angles produced simulations that were excessively disposed to alpha-helix formation, a fact which affected numerous results.[17][18][19]

Historically, force field development has been complicated by the lack of available energy and force data to fit to. Force field parameters are generated by fitting them such that the energies of structures, or the forces on each atom, predicted using the parameters match the energies or forces generated by ab initio quantum calculations. These calculations are extremely expensive, and as a result considerable extrapolation is usually made from one set of parameters to a general one in order to avoid re-doing the calculations; for example, parameters generated for the backbone of tetra-alanine may be used to describe the backbone of any protein.
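Because the dihedral energy model is linear in the amplitudes V_n/2, the fitting step can be posed as a linear least-squares problem. The sketch below uses synthetic stand-in "QM" data; all amplitudes and the noise level are made up for illustration:

```python
import numpy as np

# Synthetic "target" energies on a 10-degree dihedral scan; these stand in
# for expensive ab initio values (all amplitudes here are made up).
rng = np.random.default_rng(3)
phi = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
true_amps = np.array([0.3, 1.2, 0.1])                 # V_n/2 for n = 1, 2, 3
target = sum(a * (1.0 + np.cos(n * phi)) for n, a in zip((1, 2, 3), true_amps))
target = target + rng.normal(0.0, 0.01, size=phi.size)  # small "QM noise"

# The model E(phi) = sum_n a_n * (1 + cos(n*phi)) is linear in the a_n,
# so the best-fit amplitudes come from ordinary least squares.
A = np.stack([1.0 + np.cos(n * phi) for n in (1, 2, 3)], axis=1)
fit_amps, *_ = np.linalg.lstsq(A, target, rcond=None)
```

The recovered amplitudes match the true ones to within the noise, which is the essential mechanics behind dihedral refits such as those cited above, although real refits fit many terms against large sets of conformational energies.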

The availability of increasing computing power, more efficient algorithms, and knowledge of past methods is promising for future parameter development. For example, side-chain parameters for various amino acids may be calculated separately and then added to a general parameter set in order to characterize each amino acid individually.[20]

Force field accuracy

Classical force fields have been developed by fitting the classical parameters such that they give results consistent with experimental and quantum data. They are categorized by the type of biomolecule: protein, carbohydrate, and lipid force fields have been developed separately for all major MD programs. This categorization is still quite broad, however, and can neglect system-specific interactions.

Comparison of the potential energy surface for the two backbone dihedrals of N-methylacetamide using quantum and classical methods of calculation. When the dihedral is near its equilibrium angle, the classical approximation produces accurate results; at greater distances from equilibrium, however, the deviation between the energies predicted by the two methods grows considerably.

References

  1. Meller, Jarosław. "Molecular Dynamics." Encyclopedia of Life Sciences, 2001.
  2. R. Salomon-Ferrer, D.A. Case, R.C.Walker. An overview of the Amber biomolecular simulation package. WIREs Comput. Mol. Sci.(2012)
  3. J. P. Hansen and I. R. McDonald, Theory of Simple Liquids, 2nd Ed., Academic, 1986.
  4. Ercolessi, Furio. "A Molecular Dynamics Primer" University of Udine Spring College in Computational Physics, 1997
  5. Peter L. Freddolino, Anton S. Arkhipov, Steven B. Larson, Alexander McPherson, and Klaus Schulten. Molecular dynamics simulations of the complete satellite tobacco mosaic virus. Structure, 14:437-449, 2006.
  6. David E. Shaw, Martin M. Deneroff, Ron O. Dror, Jeffrey S. Kuskin, Richard H. Larson, John K. Salmon, Cliff Young, Brannon Batson, Kevin J. Bowers, Jack C. Chao, Michael P. Eastwood, Joseph Gagliardo, J.P. Grossman, C. Richard Ho, Douglas J. Ierardi, István Kolossváry, John L. Klepeis, Timothy Layman, Christine McLeavey, Mark A. Moraes, Rolf Mueller, Edward C. Priest, Yibing Shan, Jochen Spengler, Michael Theobald, Brian Towles, and Stanley C. Wang (July 2008). "Anton, A Special-Purpose Machine for Molecular Dynamics Simulation". Communications of the ACM (ACM) 51 (7): 91–97.
  7. Hunenberger, Phillipe. Thermostat Algorithms for Molecular Dynamics Simulations DOI:10.1007/b99427
  8. Finnerty, J. "Molecular Dynamics meets the physical world: Thermostats and Barostats"
  9. Adelman, S.A. and J.D. Doll, Generalized Langevin Equation Approach for Atom-Solid-Surface Scattering - General Formulation for Classical Scattering Off Harmonic Solids. Journal of Chemical Physics, 1976.
  10. E, W. and Li, D. (2008), The Andersen thermostat in molecular dynamics. Comm. Pure Appl. Math., 61: 96–136. doi: 10.1002/cpa.20198
  11. Brown, D. and Clarke, J. Mol. Phys., 51:1243, 1983.
  12. Berendsen, H.J.C., et al., Molecular-Dynamics with Coupling to an External Bath. Journal of Chemical Physics, 1984. 81(8): p. 3684-3690.
  13. Palmer, R. G. (1982). Advances in Physics 31: 669.
  14. Lindorff‐Larsen, Kresten, et al. "Improved side‐chain torsion potentials for the Amber ff99SB protein force field." Proteins: Structure, Function, and Bioinformatics 78.8 (2010): 1950-1958.
  15. Kirschner, Karl N., et al. "GLYCAM06: a generalizable biomolecular force field. Carbohydrates." Journal of computational chemistry 29.4 (2007): 622-655.
  16. Dickson, C. J., Rosso, L., Betz, R. M., Walker, R. C., & Gould, I. R. (2012). GAFFlipid: a General Amber Force Field for the accurate molecular dynamics simulation of phospholipid. Soft Matter.
  17. Best, Robert B., Nicolae-Viorel Buchete, and Gerhard Hummer. "Are current molecular dynamics force fields too helical?." Biophysical journal 95.1 (2008): L07-L09.
  18. Freddolino, Peter L., et al. "Force field bias in protein folding simulations." Biophysical journal 96.9 (2009): 3772.
  19. Lindorff‐Larsen, Kresten, et al. "Improved side‐chain torsion potentials for the Amber ff99SB protein force field." Proteins: Structure, Function, and Bioinformatics 78.8 (2010): 1950-1958.
  20. Lindorff‐Larsen, Kresten, et al. "Improved side‐chain torsion potentials for the Amber ff99SB protein force field." Proteins: Structure, Function, and Bioinformatics 78.8 (2010): 1950-1958.