Topological String Theory Methods of Computer-aided Drug Design

From Wikibooks, open books for an open world

Drug design is an important real-world application of chemistry to the treatment of disease. In this wikibook we propose a new neural network that can aid drug design. Relevant videos, papers, and book chapters are referenced throughout the wikibook.

Scope of the book[edit]

The scheme of the neural network is as follows. We first represent each compound in chemical compound space by a chemical graph, then convert that graph into a set of generalized twisted solid torus product links, parametrized by twist turns. From these links we extract BPS invariants arising in topological string theory, which can be obtained from HOMFLY-PT polynomials via Chern-Simons theory. General HOMFLY-PT polynomials are hard to compute directly, but surgery techniques let us find such polynomials in surgered manifolds. We then design a neural network that uses these BPS invariants as topological indices of the compound. Since the invariants are tensors, we need to reduce them to scalars; this is achieved by generalizing the Ω-triangle in chromogeometry and the Richardson constant in rational trigonometry. Kernels and convolutions based on spread polynomials and algebraic calculus are then introduced. Finally, we briefly discuss the activity and property parameters in chemistry that are particularly useful for drug design, and upgrade the neural network to a generative adversarial network with an autoencoder.
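The BPS-invariant indices themselves are developed over the following chapters. As a minimal, self-contained stand-in for the idea of a scalar topological index of a chemical graph, the sketch below computes the classical Wiener index (the sum of shortest-path distances between all pairs of atoms); the plain-dict graph encoding and function names here are illustrative choices, not the book's machinery:

```python
from collections import deque

def bfs_distances(graph, start):
    """Shortest-path distances (in bonds) from start to every atom."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def wiener_index(graph):
    """Sum of shortest-path distances over all unordered atom pairs."""
    total = sum(d for u in graph for d in bfs_distances(graph, u).values())
    return total // 2  # each unordered pair is counted twice

# n-butane as a hydrogen-suppressed chemical graph: C1-C2-C3-C4
butane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(wiener_index(butane))  # → 10
```

The Wiener index of n-butane is 10, the textbook value for the path graph on four vertices; the link-theoretic indices proposed in this book play the same structural role as descriptors fed to the network, but carry far more topological information.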


  1. Knots, HOMFLY-PT Polynomial, Chern-Simons Theory and Surgery
  2. Topological String Theory and BPS invariants
  3. Surgerized Twisted Solid Torus Links, Twisted Solid Torus Product Links and Generalizations
  4. Links Inspired by Molecules, Topological Indices, Computational Chemistry and Computer-aided Drug Design
  5. Rational Trigonometry and Chromogeometry, Ω-tetrahedra and Generalized Richardson Constant of Tensors
  6. Algebraic Calculus, Convolutional Neural Network and Backpropagation
  7. Autoencoder and Generative Adversarial Network
  8. Conclusions: Putting it all together
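For orientation ahead of Chapter 1: the HOMFLY-PT polynomial P(L) of an oriented link L, in the variables a and z (one common convention; others use ℓ and m), is determined by the skein relation

```latex
a\,P(L_+) \;-\; a^{-1}\,P(L_-) \;=\; z\,P(L_0), \qquad P(\text{unknot}) = 1,
```

where L₊, L₋ and L₀ are links whose diagrams differ only at a single crossing (positive crossing, negative crossing, and smoothed, respectively).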