Learning generative models of molecular dynamics
BMC Genomics volume 13, Article number: S5 (2012)
Abstract
We introduce three algorithms for learning generative models of molecular structures from molecular dynamics simulations. The first algorithm learns a Bayesian-optimal undirected probabilistic model over user-specified covariates (e.g., fluctuations, distances, angles, etc.). L1 regularization is used to ensure sparse models and thus reduce the risk of over-fitting the data. The topology of the resulting model reveals important couplings between different parts of the protein, thus aiding in the analysis of molecular motions. The generative nature of the model makes it well-suited to making predictions about the global effects of local structural changes (e.g., the binding of an allosteric regulator). Additionally, the model can be used to sample new conformations. The second algorithm learns a time-varying graphical model where the topology and parameters change smoothly along the trajectory, revealing the conformational sub-states. The last algorithm learns a Markov Chain over undirected graphical models, which can be used to study and simulate kinetics. We demonstrate our algorithms on multiple molecular dynamics trajectories.
Introduction
The three-dimensional structures of proteins and other molecules vary in time according to the laws of thermodynamics. Each molecule visits an ensemble of states which can be partitioned into distinct conformational sub-states [1, 2] consisting of similar structures. The study of these conformational sub-states remains an active area of research [3–5] and has provided valuable insights into biological function, such as enzyme catalysis [5–7] and energy transduction [8].
Molecular dynamics (MD) simulations are often used to characterize conformational dynamics [9]. These simulations are performed by numerically integrating Newton's laws of motion for a set of atoms. Conformational frames are written to disk into a trajectory for subsequent analysis. Until recently, MD simulations were limited to time-scales of several tens of nanoseconds (ns = 10⁻⁹ sec). Recent advances in hardware and software (e.g., [10–14]) make it possible to investigate conformational dynamics on microsecond (μs = 10⁻⁶ sec) and millisecond (ms = 10⁻³ sec) time-scales. Such long simulations are especially well-suited to identifying and studying the conformational sub-states relevant to biological function. Unfortunately, the corresponding trajectories are often difficult to analyze and interpret due to their size and complexity. Thus, there is a need for algorithms for analyzing such long-timescale trajectories. The primary goal of this paper is to introduce new algorithms to do so.
Our approach to analyzing MD data is to learn generative models known as Markov Random Fields (MRFs). This is the first time MRFs have been used to model MD data. An MRF is an undirected probabilistic graphical model that encodes the joint probability distribution over a set of user-specified variables. In this paper, those variables correspond to the positional fluctuations of the atoms, but the technique can be easily extended to other quantities, such as pairwise distances or angles. The generative nature of the model means that new conformations can be sampled and, perhaps more importantly, that users can make structural alterations to one part of the model (e.g., modeling the binding of a ligand) and then perform inference to predict how the rest of the system will respond.
We present three closely related algorithms. The first algorithm learns a single model from the data; both the topology and the parameters of the model are learned. The topology of the learnt graph reveals which variables are directly coupled and which correlations are indirect. Alternative methods, such as constructing a covariance matrix, cannot distinguish between direct and indirect correlations. Our algorithm is guaranteed to produce an optimal model, and regularization is used to reduce the tendency to over-fit the data. The second algorithm learns a time-varying model where the topology and parameters of the MRF change smoothly over time. Time-varying models reveal the different conformational sub-states visited by the molecule and the features of the energy barriers that separate them. The final algorithm learns a Markov Chain over MRFs which can be used to generate new trajectories and study kinetics.
Background
Molecular dynamics simulation
Molecular Dynamics simulations involve integrating Newton's laws of motion for a set of atoms. Briefly, given a set of n atomic coordinates and the corresponding velocity vectors, MD updates the positions and velocities of each atom according to an energy potential. The updates are performed via numerical integration, resulting in a conformational trajectory. The size of the time step for the numerical integration is normally on the order of 1-2 femtoseconds (fs = 10⁻¹⁵ sec), meaning that a 1 microsecond simulation requires on the order of one billion integration steps. In most circumstances, every 1,000th to 10,000th conformation is written to disk as an ordered series of frames.
Traditional methods for analyzing MD data either monitor the dynamics of global statistics (e.g., the radius of gyration, total energy, etc.), or else identify sub-states via clustering of the frames [15–17] or through Principal Components Analysis (PCA) and closely related methods (e.g., [18–22]). Clustering-based methods do not produce generative models and generally rely on pairwise comparisons between frames, and thus run in quadratic time with respect to the number of frames in the trajectory. Our algorithms produce generative models and perform only linear work in the number of frames. This complexity difference is especially important for long-timescale simulations. PCA-based methods implicitly assume that the data are drawn from a multivariate Gaussian distribution. Our method makes the same assumption but differs from PCA in two important ways. First, PCA projects the data onto an orthogonal basis; our method involves no change of basis, making the resulting model easier to interpret. Second, we employ L1 regularization when learning the parameters of our model. Regularization is a common strategy for reducing the tendency to over-fit data by, informally, penalizing overly complicated models. We use L1 regularization because it has desirable statistical properties. Specifically, it leads to consistent models (that is, given enough data our algorithm learns the true model) while enjoying high statistical efficiency (that is, the number of samples needed to learn the true model is small).
More recently, Lange and Grubmüller introduced full correlation analysis [23], which can capture both linear and non-linear correlated motions from MD simulations. The algorithms in this paper are limited to linear models, but we note that they can be easily extended to more complex forms by using non-Gaussian random variables (e.g., [24, 25]). Our final algorithm produces models that resemble Markov State Models (MSMs) [26] but are different in that they are fully generative.
Markov Random Fields
A Markov Random Field consists of an undirected graph G over a set of random variables X = {X_1, ..., X_n} and a set of functions Θ over the nodes and edges of G. Together, they define the joint distribution P(X). The topology of the graph determines the set of conditional independencies between the variables. In particular, the ith random variable is conditionally independent of the remaining variables, given its neighbors in the graph. Informally, if variables X_i and X_j are not connected by an edge in the graph, then any correlation between them is indirect. By 'indirect' we mean that the correlation between X_i and X_j (if any) can be explained in terms of a pathway of correlations (e.g., X_i → X_k → ··· → X_j). Conversely, if X_i and X_j are connected by an edge, then the correlation is direct. Our algorithm automatically detects these conditional independencies and learns the sparsest possible model, subject to fitting the data.
Gaussian Graphical Models
A Gaussian Graphical Model (GGM), or Gaussian Markov Random Field, is simply an MRF where each variable is normally distributed. Thus, a GGM encodes a multivariate Gaussian distribution. A GGM has parameters (μ, Σ⁻¹), where Σ⁻¹ is an n × n matrix (known as the precision matrix) and μ is an n × 1 vector. The non-zero elements of Σ⁻¹ reveal the edges in the MRF. The inverse of the precision matrix, denoted Σ, is the covariance matrix of a multivariate Gaussian distribution with mean μ.
Gaussian distributions have a number of desirable properties, including the availability of analytic expressions for a variety of quantities. For example, the probability of observing a conformation x is:

$$P(x) = \frac{1}{Z}\exp\left(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right) \qquad (1)$$
where Z = (2π)^{n/2} |Σ|^{1/2} is the partition function and |Σ| denotes the determinant of Σ. Other quantities of interest can be computed as well, such as the free energy of the model, −ln Z, or its differential entropy:

$$H(P) = \frac{1}{2}\ln\big((2\pi e)^n\, |\Sigma|\big) \qquad (2)$$
or the KL-divergence between two different models P = N(μ_P, Σ_P) and Q = N(μ_Q, Σ_Q):

$$D(P\,\|\,Q) = \frac{1}{2}\left(\mathrm{trace}\big(\Sigma_Q^{-1}\Sigma_P\big) + (\mu_Q - \mu_P)^T \Sigma_Q^{-1} (\mu_Q - \mu_P) - n + \ln\frac{|\Sigma_Q|}{|\Sigma_P|}\right) \qquad (3)$$
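For concreteness, the quantities in Eqs. 2 and 3 are a few lines of linear algebra. Below is a minimal NumPy sketch (our experiments used Matlab; the function and variable names here are illustrative):

import numpy as np

def differential_entropy(Sigma):
    # Eq. 2: H = 0.5 * ln((2*pi*e)^n * |Sigma|)
    n = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

def kl_divergence(mu_p, Sigma_p, mu_q, Sigma_q):
    # Eq. 3: D(P || Q) between two multivariate Gaussians P and Q.
    n = mu_p.shape[0]
    Q_inv = np.linalg.inv(Sigma_q)
    d = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(Sigma_p)
    _, logdet_q = np.linalg.slogdet(Sigma_q)
    return 0.5 * (np.trace(Q_inv @ Sigma_p) + d @ Q_inv @ d - n + logdet_q - logdet_p)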
A GGM can also be used to manipulate a subset of variables and then compute the densities of the remaining variables. For example, let V ⊂ X be an arbitrary subset of the variables X and let W be the complement set. We can condition the model by setting the variables in V to some particular value, v. The conditional distribution over W given v is a multivariate Gaussian with parameters (μ_{W|V}, Σ_{W|V}), where

$$\mu_{W|V} = \mu_W + \Sigma_{WV}\,\Sigma_{VV}^{-1}\,(v - \mu_V) \qquad (4)$$

$$\Sigma_{W|V} = \Sigma_{WW} - \Sigma_{WV}\,\Sigma_{VV}^{-1}\,\Sigma_{VW} \qquad (5)$$

Here, Σ_{VV}, Σ_{WW}, Σ_{WV}, and Σ_{VW} denote the blocks of the covariance matrix Σ obtained by partitioning the variables into the sets V and W, and μ_V and μ_W denote the corresponding pieces of the mean. Thus, inference can be performed analytically via matrix operations. In this way, users can predict the conformational changes induced by local perturbations or, more generally, study the couplings between arbitrarily chosen subsets of variables.
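Conditioning is equally direct. The following sketch (names ours) implements Eqs. 4 and 5, where V holds the indices of the clamped variables and v_obs their assigned values:

import numpy as np

def condition(mu, Sigma, V, v_obs):
    # Eqs. 4-5: parameters of the Gaussian over W given X_V = v_obs.
    W = np.setdiff1d(np.arange(len(mu)), V)
    K = Sigma[np.ix_(W, V)] @ np.linalg.inv(Sigma[np.ix_(V, V)])
    mu_w = mu[W] + K @ (v_obs - mu[V])                 # conditional mean (Eq. 4)
    Sigma_w = Sigma[np.ix_(W, W)] - K @ Sigma[np.ix_(W, V)].T   # conditional covariance (Eq. 5)
    return mu_w, Sigma_w

Clamping the variables of, say, a bound ligand to a perturbed configuration and reading off mu_w yields the most likely response of the rest of the system.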
Algorithms
We now present three algorithms for learning various kinds of generative models from MD data.
Input The input to all three algorithms is a time-series of vectors D = (x_1, ..., x_t), where x_i is an n × 1 vector of covariates (e.g., positional and/or angular deviations) and t is the number of snapshots in the MD trajectory.
Algorithm 1
Output The first algorithm produces a Gaussian Graphical Model (μ, Σ⁻¹). The first step is to compute the sample mean μ. Then it computes the regularized precision matrix Σ⁻¹ (see below). Finally, the regularized covariance matrix is computed as Σ = (Σ⁻¹)⁻¹.
The algorithm produces the sparsest precision matrix that still fits the data (see below). It also guarantees that Σ⁻¹ is positive-definite, which means it can be inverted to produce the regularized covariance matrix (as opposed to the sample covariance, which is trivial to compute). This is important because Eqs. 1-3 require the covariance matrix, Σ. We further note that a sparse precision matrix does not imply that the corresponding covariance matrix is sparse, nor does a sparse covariance imply that the corresponding precision matrix is sparse. That is, our algorithm is not equivalent to simply thresholding the sample covariance matrix and then inverting it.
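The distinction is easy to see in a toy example (the numbers below are illustrative): a chain-structured model has a tridiagonal, hence sparse, precision matrix, yet its inverse is fully dense:

import numpy as np

# Precision matrix of a 4-variable chain X1 - X2 - X3 - X4 (sparse: tridiagonal).
P = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])
Sigma = np.linalg.inv(P)
print(np.round(Sigma, 2))  # every entry is non-zero: all pairs are correlated,
                           # but only chain neighbors are directly coupled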
Learning regularized precision matrices
A straight-forward way of learning a GGM is to find the parameters (μ, Σ⁻¹) that maximize the likelihood of the data (i.e., the parameters that maximize P(D | μ, Σ⁻¹)). It is known that a maximum likelihood model can be produced by setting (μ, Σ) to the sample mean and sample covariance matrices, respectively. Unfortunately, maximum likelihood estimates can be prone to over-fitting. This is not surprising because the covariance matrix alone contains m = O(n²) parameters, each of which must be estimated from the data. This is relevant because the number of independent samples needed to obtain a statistically robust estimate of Σ grows polynomially in m. We note that while modern MD simulations do produce large numbers of samples (i.e., frames), these samples are not independent (because they form a time-series), and so the effective sample size is much smaller than the number of frames in the trajectory.
Our algorithm addresses the problem of over-fitting by maximizing the following objective function:

$$\log P(D \mid \hat{\mu}, \Sigma^{-1}) \;-\; \lambda\|\Sigma^{-1}\|_1 \qquad (6)$$
Here, ||Σ⁻¹||₁ is the L1 norm of the precision matrix, defined as the sum of the absolute values of its elements. It can be interpreted as a measure of the complexity of the model: each non-zero element of Σ⁻¹ corresponds to a parameter in the model and must be estimated from the data. Thus, Eq. 6 establishes a tradeoff between the log-likelihood of the data (the first term) and the complexity of the model (the second term). The scalar value λ controls this tradeoff, such that higher values produce sparser precision matrices. This is our algorithm's only parameter. Its value can be computed analytically [27] from the number of frames in the trajectory and the number of variables. Alternatively, users may elect to adjust λ to obtain precision matrices of a desired sparsity.
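Objectives of this form are widely implemented. For example, scikit-learn's GraphicalLasso solves a closely related variant (it penalizes only the off-diagonal entries of the precision matrix); below is a minimal sketch on synthetic placeholder data:

import numpy as np
from sklearn.covariance import GraphicalLasso

# X is a t-by-n data matrix with one trajectory frame per row; here it is
# random placeholder data standing in for, e.g., C-alpha fluctuations.
rng = np.random.default_rng(0)
X = rng.standard_normal((4500, 30))

model = GraphicalLasso(alpha=0.1)    # alpha plays the role of lambda in Eq. 6
model.fit(X)
precision = model.precision_         # sparse regularized precision matrix
edges = np.abs(precision) > 1e-8     # non-zero off-diagonal entries are MRF edges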
Algorithmically, our algorithm maximizes Eq. 6 in an indirect fashion, by defining and then solving a convex optimization problem. Using the functional form of P(x) according to Eq. 1, the log-likelihood of Σ⁻¹ can be rewritten (up to an additive constant) as:

$$\ell(\Sigma^{-1}; D) = \log\det\Sigma^{-1} \;-\; \frac{1}{t}\sum_{i=1}^{t}(x_i - \hat{\mu})^T\,\Sigma^{-1}\,(x_i - \hat{\mu})$$

Noting that a scalar is equal to its own trace, and that trace(ABC) = trace(CAB), the log-likelihood of Σ⁻¹ can then be rewritten as:

$$\ell(\Sigma^{-1}; D) = \log\det\Sigma^{-1} \;-\; \mathrm{trace}\left(\frac{1}{t}\sum_{i=1}^{t}(x_i - \hat{\mu})(x_i - \hat{\mu})^T\,\Sigma^{-1}\right)$$
Next, using the definition of the sample covariance matrix,

$$S = \frac{1}{t}\sum_{i=1}^{t}(x_i - \hat{\mu})(x_i - \hat{\mu})^T$$

we can define the matrix Σ⁻¹ that maximizes Eq. 6 as the solution to the following optimization problem:

$$\Sigma^{-1} = \arg\max_{X \succ 0}\; \log\det X - \mathrm{trace}(SX) - \lambda\|X\|_1 \qquad (7)$$
We note that L1 regularization is equivalent to maximizing the likelihood under a Laplace prior and so the solution to Eq. 7 is a maximum a posteriori (MAP) estimate of the true precision matrix, as opposed to a maximum likelihood estimate. That is, our algorithm is a Bayesian method. Moreover, the use of L1 regularization ensures additional desirable properties including consistency -- given enough data, the learning procedure learns the true model, and high statistical efficiency -- the number of samples needed to achieve this guarantee is small.
We now show that the optimization problem defined in Eq. 7 is smooth and convex and can therefore be solved optimally. First, we consider the dual form of the objective. To obtain the dual, we first rewrite the L1 penalty as:

$$\lambda\|X\|_1 = \max_{\|U\|_\infty \le \lambda} \mathrm{trace}(XU)$$

where ||U||∞ denotes the maximum absolute value of the elements of the matrix U. Given this change of formulation, the primal form of the optimization problem can be rewritten as:

$$\max_{X \succ 0}\; \min_{\|U\|_\infty \le \lambda}\; \log\det X - \mathrm{trace}\big(X(S + U)\big) \qquad (8)$$
That is, the optimal Σ⁻¹ is the one that maximizes the worst-case log-likelihood over all additive perturbations of the covariance matrix.
Next, we exchange the min and max in Eq. 8. The inner max in the resulting function can now be solved analytically by calculating the gradient and setting it to zero; the maximizer is X = (S + U)⁻¹. The objective can thus be written as:

$$\min_{\|U\|_\infty \le \lambda}\; -\log\det(S + U) - n$$

such that Σ⁻¹ = (S + U*)⁻¹, where U* denotes the minimizer.
After one last change of variables, W = S + U, the dual form of Eq. 7 can now be defined as:

$$\Sigma = \arg\max_{W:\ \|W - S\|_\infty \le \lambda}\; \log\det W \qquad (9)$$
Eq. 9 is smooth and convex, and for small values of n it can be solved by standard convex multivariate optimization techniques, such as the interior point method. For larger values of n we use Block Coordinate Descent [27] instead.
Block Coordinate Descent
Given a matrix A, let A\k\j denote the matrix produced by removing column k and row j of A. Let A_j also denote column j of A with the diagonal element A_jj removed. The Block Coordinate Descent algorithm [27] (Algorithm 1) proceeds by optimizing one row and one column of the variable matrix W at a time. The algorithm iteratively optimizes all columns until a convergence criterion is met. The W produced at each iteration is strictly positive definite, and so the regularized covariance matrix Σ = W is invertible.
Algorithm 1 Block Coordinate Descent
Require: Tolerance parameter ε, sample covariance S, and regularization parameter λ.
Initialize W(0):= S + λI where I is the identity matrix.
repeat
for j = 1,... n do
{// Here, W(j-1) denotes the current iterate.}
Compute y* := argmin_y { yᵀ (W(j-1)\j\j)⁻¹ y : ||y − S_j||∞ ≤ λ }
Set W(j) to W(j-1) with column (and row) j replaced by y*, keeping the diagonal element fixed.
end for
Set W(0) = W(n)
until trace((W(0))⁻¹ S) − n + λ||(W(0))⁻¹||₁ ≤ ε.
return W(0)
The time complexity of this algorithm is O(n^4.5/ε) [27] when converging to a solution within ε > 0 of the optimum. This compares favorably with the O(n⁶ log(1/ε)) complexity that would have been achieved using the interior point method on the dual form [28].
In summary, the algorithm produces a time-averaged model of the data by computing the sample mean and then constructing the optimal regularized Σ by solving Eq. 9 using Block Coordinate Descent. The regularized covariance matrix Σ is guaranteed to be invertible, which means we can always compute the precision matrix Σ⁻¹, which can be interpreted as a graph over the variables revealing the direct and indirect correlations between them.
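The following NumPy/SciPy sketch mirrors Algorithm 1, except that the column subproblem is handed to a generic box-constrained optimizer (L-BFGS-B) rather than the specialized solver of [27]; all names are ours:

import numpy as np
from scipy.optimize import minimize

def block_coordinate_descent(S, lam, eps=1e-4, max_sweeps=50):
    # Solve Eq. 9: maximize log det W subject to ||W - S||_inf <= lam.
    n = S.shape[0]
    W = S + lam * np.eye(n)                           # feasible, positive-definite start
    P = np.linalg.inv(W)
    for _ in range(max_sweeps):
        for j in range(n):
            mask = np.arange(n) != j
            A = np.linalg.inv(W[np.ix_(mask, mask)])  # (W\j\j)^{-1}
            s_j = S[mask, j]
            # Column subproblem: min_y y'Ay  s.t.  ||y - S_j||_inf <= lam
            res = minimize(lambda y: y @ A @ y, x0=W[mask, j],
                           jac=lambda y: 2.0 * A @ y,
                           bounds=list(zip(s_j - lam, s_j + lam)),
                           method="L-BFGS-B")
            W[mask, j] = W[j, mask] = res.x
        P = np.linalg.inv(W)
        # Duality-gap convergence test from Algorithm 1.
        if np.trace(P @ S) - n + lam * np.abs(P).sum() <= eps:
            break
    return W, P                                       # regularized covariance, precision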
Algorithm 2
The second algorithm is a straight-forward extension of the first. Instead of producing a single time-averaged model, it produces a time-varying model: a sequence of GGMs, one per window. Here, τ indexes sequentially ordered windows of frames in the trajectory. The width of the window, w, is a parameter and may be adjusted to learn time-varying models at a particular time-scale. Naturally, a separate time-averaged model could be learned for each window independently. Instead, the second algorithm applies a simple smoothing kernel so that the parameters of the τth window include information from neighboring windows too. In this way, the algorithm ensures that the parameters of the time-varying model evolve as smoothly as possible, subject to fitting the data.
Let D(τ) ⊆ D denote the subset of frames in the MD trajectory that correspond to the τth window, 1 ≤ τ ≤ T. The second algorithm solves the following optimization problem for each 1 ≤ τ ≤ T:

$$\Sigma^{-1}(\tau) = \arg\max_{X \succ 0}\; \log\det X - \mathrm{trace}\big(S(\tau)\,X\big) - \lambda\|X\|_1$$
Here, S(τ) is the weighted covariance matrix, calculated as follows:

$$S(\tau) = \frac{\sum_{k} w_k\, S(k)}{\sum_{k} w_k}$$
where k indexes over the windows τ − κ to τ + κ, κ is a user-specified kernel width, S(k) is the sample covariance matrix of the frames in window k, and the weights w_k are defined by a nonnegative kernel function chosen by the user. In our experiments the kernel mixed the current window and the previous window, with the current window having twice the weight of the previous one. The time-varying model is then constructed by solving Eq. 9 for each S(τ). That is, the primary difference between the time-averaged and time-varying versions of the algorithm is the kernel-weighted estimate of the covariance matrix.
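Below is a sketch of the windowed, kernel-smoothed estimation, reusing block_coordinate_descent from the sketch above (the window width w, regularization λ, and kernel weights are the user-set choices described in the text; the default weights here are the one-sided sawtooth kernel from our engrailed homeodomain experiment):

import numpy as np

def time_varying_models(X, w, lam, weights=(0.57, 0.29, 0.14)):
    # X: t-by-n matrix of frames. weights[d] is the weight of the window
    # d steps in the past (a one-sided sawtooth kernel).
    T = X.shape[0] // w
    mus  = [X[k * w:(k + 1) * w].mean(axis=0) for k in range(T)]
    covs = [np.cov(X[k * w:(k + 1) * w], rowvar=False, bias=True) for k in range(T)]
    models = []
    for tau in range(T):
        ds = [d for d in range(len(weights)) if tau - d >= 0]
        norm = sum(weights[d] for d in ds)
        S_tau = sum(weights[d] * covs[tau - d] for d in ds) / norm  # weighted covariance
        W, P = block_coordinate_descent(S_tau, lam)                 # solve Eq. 9 per window
        models.append((mus[tau], W, P))
    return models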
Algorithm 3
The final algorithm builds on the second. Recall that the second algorithm learns T sequentially ordered models over windows of the trajectory. Moreover, recall that each model encodes a multivariate Gaussian (Eq. 1) and that the KL-divergence between multivariate Gaussians can be computed analytically via Eq. 3. The KL-divergence (also known as information gain or relative entropy) is a non-negative measure of the difference between two probability distributions. It is zero if and only if the two distributions are identical. It is not, however, a distance metric, because it is not symmetric; that is, D(P||Q) ≠ D(Q||P) in general. However, it is common to define a symmetric KL-divergence by simply summing the two directions: KL_sym = D(P||Q) + D(Q||P). We can thus cluster the models using any standard clustering algorithm, such as k-means or a hierarchical approach. In our experiments we used complete-linkage clustering, an agglomerative method that minimizes the maximum distance between elements when merging clusters.
Let S be the set of clusters returned by a clustering algorithm. Our final algorithm treats those clusters as states in a Markov Chain. The prior probability of being in each state can be estimated using free energy calculations [29, 30] for each cluster, or according to the relative sizes of each cluster. It then estimates the transition probabilities between states i and j by counting the number of times a model assigned to cluster i is followed by a model assigned to cluster j. This simple approach creates a model that can be used to generate new trajectories by first sampling states from the Markov Chain and then sampling conformations from the models associated with that state.
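Below is a sketch of this final step, assuming a precomputed T × T matrix D of symmetric KL-divergences between the window models (e.g., via the kl_divergence sketch above); here the state prior is estimated from relative cluster sizes:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def markov_chain_over_models(D, n_states=7):
    # Complete-linkage clustering of the models, using D as the distance matrix.
    Z = linkage(squareform(D, checks=False), method="complete")
    labels = fcluster(Z, t=n_states, criterion="maxclust")          # labels in 1..n_states
    prior = np.bincount(labels, minlength=n_states + 1)[1:] / len(labels)
    # Transition counts between temporally adjacent models, then row-normalize.
    C = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        C[a - 1, b - 1] += 1.0
    P = C / np.clip(C.sum(axis=1, keepdims=True), 1.0, None)
    return prior, P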
Experiments
We applied our algorithms to several molecular dynamics simulation trajectories. In this section, we illustrate some of the results obtained through this analysis. The algorithms were implemented in Matlab and run on a dual-core Intel T9600 processor running at 2.8 GHz. The wall-clock runtimes for all the experiments ranged from seconds to about 10 minutes, depending on the size of the data set and the parameter settings.
Algorithm 1: application to the early events of HIV entry
We applied the first algorithm to simulations of a complex (Figure 1-left) consisting of gp120 (a glycoprotein on the surface of the HIV envelope) and the CD4 receptor (a glycoprotein expressed on the surface of helper T cells). The binding of gp120 to CD4 receptors is among the first events involved in HIV's entry into helper T cells. We performed two simulations using NAMD [31]. The first simulated the gp120-CD4 complex in explicit solvent at 310 K. The second simulated the same complex bound to Ibalizumab (Figure 1-right), a humanized monoclonal antibody that binds to CD4 and inhibits the viral entry process [32]. Each trajectory was 2 ns long and contained 4,500 frames.
Ibalizumab's mechanism of action is poorly understood. As can be seen in Figure 1, Ibalizumab does not prevent gp120 from binding to CD4, nor does it directly bind to gp120 itself, suggesting that its inhibitory action occurs via an allosteric mechanism. To investigate this phenomenon, we applied our first algorithm to the two trajectories and then compared the resulting models. The variables in the models corresponded to the positional fluctuations of the C-α atoms, relative to the initial frame of the simulation.
Correlation networks
Figure 2 illustrates the correlation networks learned from the drug-free (left) and drug-bound (right) simulations. The same λ value (250) was used in each case. In each panel, a black dot indicates that residue i is connected to residue j in the graphical model. The residues corresponding to gp120 and CD4 are labeled on the left-hand side. Edges exist between both spatially proximal and distant residues. For these panels, only the data from the gp120 and CD4 atoms were modeled. However, the effects of the drug are obvious. In the drug-free case the direct correlations are largely intra-molecular, with inter-molecular correlations limited to the binding interface. The drug-bound model, in contrast, exhibits many more inter-molecular edges. Moreover, the drug-bound gp120 has far fewer intra-molecular edges. That is, Ibalizumab not only modulates the interactions between gp120 and CD4, it also changes the internal correlation structure of gp120, despite the fact that the drug binds only to CD4. This is consistent with the hypothesis that Ibalizumab's inhibitory action occurs via an allosteric mechanism.
The probabilistic nature of the model means that it is possible to compute the likelihood of each data set under both models. Table 1 presents the log-likelihoods of both data sets under both models. As expected, the log-likelihood of the unbound data is larger (i.e., more likely) under the unbound model than under the bound model, and vice versa. That is, the models are capturing statistical differences between the simulations.
Figure 3 illustrates the correlation networks learned for all three molecules in the drug-bound simulation. A red box encompasses edges between the drug and the V5 loop of gp120. These particular couplings are interesting because it is known that mutations to the V5 loop can cause resistance to Ibalizumab [33]. Future simulations of such mutants might provide further insights into the mechanism of resistance.
Comparison to sub-optimal models
Our method is guaranteed to return an optimal model. Here we compare the models returned by our algorithm to those obtained by a reasonable, but nevertheless sub-optimal, algorithm for generating sparse networks. For comparison, we inverted the sample covariance matrices for each data set. The resulting sample precision matrices were then thresholded so that they had the same number of edges as the ones produced by our method. We find that while the resulting models have similar fits to the data (-0.02 log-likelihood for the unbound trajectory; -0.03 log-likelihood for the bound trajectory), the L1 penalty is much larger in each case (0.86 vs 15.1 for unbound; 0.75 vs 12.9 for bound). The difference in L1 penalties is due to the radically different choices of edges each method makes. Only 41% (resp. 31%) of the unbound (resp. bound) edges match the ones identified by our algorithm. Moreover, the thresholded sample precision matrices (Figure 4) lack the kind of structure seen in Figure 2. Thus, in addition to maximizing Eq. 6, the models produced by our algorithm are potentially easier to interpret.
Perturbation analysis
Next, we demonstrate the use of inference to quantify the sensitivity of gp120 to structural perturbations in the drug. We conditioned the model learned from the trajectory with gp120, CD4, and Ibalizumab on the structure of the drug and then performed inference (Eq. 4) to compute the most likely configuration of the remaining variables (i.e., those corresponding to gp120 and CD4). This was repeated for each frame in the trajectory. The residues with the highest average displacement are illustrated as red spheres in Figure 5. As expected, the residues that form the binding interface between CD4 and Ibalizumab are sensitive to Ibalizumab's motions. Interestingly, a number of gp120 residues are also sensitive, including residues in the vicinity of the V5 loop.
Algorithm 2: application to a 1 microsecond simulation of the engrailed homeodomain
We applied the second algorithm to a simulation of the engrailed homeodomain (Figure 6), a 54-residue DNA-binding domain. The DNA-binding domains of the homeotic proteins, called homeodomains (HDs), play an important role in the development of all metazoans [34], and certain mutations to HDs are known to cause disease in humans [35]. Homeodomains fold into a highly conserved structure consisting of three alpha-helices wherein the C-terminal helix makes sequence-specific contacts in the major groove of DNA [36]. The Engrailed Homeodomain (En-HD) is an ultra-fast folding protein that is predicted to exhibit significant amounts of helical structure in the denatured state ensemble [37]. Moreover, the experimentally determined unfolding rate, 1.1 × 10³/sec [38], is also fast. Taken together, these observations suggest that the protein may exhibit substantial conformational fluctuations at equilibrium.
We performed three 50-microsecond simulations of the protein at 300, 330, and 350 K. These simulations were performed on Anton [14], a special-purpose supercomputer designed to perform long-timescale simulations. Each simulation had more than 500,000 frames. In this paper, we learned a time-varying model of the first microsecond of the 300 K trajectory, modeling the fluctuations of the alpha carbons. The window size was 2 ns, and a sawtooth smoothing kernel was applied such that the ith model is built from the data in windows i, i-1, and i-2, with kernel weights 0.57, 0.29, and 0.14, respectively. A total of 500 models were learned from the first microsecond of the trajectory.
Figure 7-A plots the differential entropy (Eq. 2) of the 500 models. We see that the curve has a variety of peaks and valleys that can be used to segment the trajectory into putative sub-states. Figures 7-B and 7-C illustrate the correlation networks obtained from the models with the smallest and largest differential entropies, respectively. As can be seen, the simulation visits sub-states that have radically different correlation structures.
Figure 8-A plots the average log-likelihood of the frames from the (i+1)st window under the ith model. Sharp drops in the likelihood can also be used to segment the trajectory into possible sub-states and to pinpoint the moment when the system transitions between them. Figure 8-B shows the log-likelihood of each frame under each of the 500 models. Figure 8-C shows the first 50 rows and the first 2,000 columns of Figure 8-B. The block structure of the matrix clearly delineates the sub-states visited by the simulation.
Figure 9-A plots the symmetric version of the KL-divergence (Eq. 3) between sequential models. Once again, spikes in this curve can be used to segment the trajectory.
Algorithm 3: application to a 1 microsecond simulation of the engrailed homeodomain
Using the 500 models learned in the previous section, we computed the symmetric KL-divergence between all pairs of models. Recall that the KL-divergence (Eq. 3) is a measure of the difference between distributions. Figure 9-B plots the pairwise KL divergences between the 500 models.
We then applied complete-linkage clustering to the KL-divergence matrix. Complete-linkage clustering minimizes the maximum distance between elements when merging clusters. We selected a total of 7 clusters based on the assumption that the number of sub-states visited by a sequence of m models is proportional to the logarithm of m. The intuition behind this assumption is that different sub-states are separated by energy barriers, and the probability of surmounting an energy barrier is exponentially small in the height of the barrier. Figure 10 shows two representative structures from the two largest clusters. As can be seen, the primary difference between the two structures is the N-terminal loop.
Finally, we estimated the parameters of a Markov chain over the 7 clusters by counting the number of times a model from the i th cluster was followed by a model from the j th cluster. The resulting state-transition matrix is shown in Figure 11. The matrix indicates that state 4 is the dominant state, but inter-converts with states 6 and 7. This state-transition matrix and the graphical models associated with each state encapsulate the statistics of the trajectory.
Discussion
Many existing techniques for analyzing MD data are closely related to, or direct applications of, Principal Components Analysis (PCA). Quasi-Harmonic Analysis (QHA) [18, 19], for example, is PCA applied to a mass-weighted covariance matrix of atomic fluctuations. PCA-based methods diagonalize the covariance matrix and thus produce a set of eigenvectors and corresponding eigenvalues. Each eigenvector can be interpreted as one of the principal modes of vibration within the system or, equivalently, as a normally distributed random variable with zero mean and variance proportional to the corresponding eigenvalue. That is, PCA-based methods model the data in terms of a multivariate Gaussian distribution. Our methods also build multivariate Gaussian models of the data but do so over the real-space variables, not the eigen-space variables.
PCA-based methods generally project the data onto a low-dimensional subspace spanned by the eigenvectors corresponding to the largest eigenvalues. This is done to simplify the data and because lower dimensional models tend to be more robust (i.e., less likely to over-fit the data). Our methods, in contrast, use regularization when estimating the parameters of the model to achieve the same goals.
The eigenvectors produced by PCA-based methods contain useful information about how different regions of the system move in a coordinated fashion. In particular, the components of each vector quantify the degree of coupling between the covariates in that mode. However, the eigenvectors make no distinction between direct and indirect couplings. Moreover, eigenvectors are an inherently global description of dynamics. Our methods, in contrast, do not perform a change of basis and instead model the data in terms of a network of correlations. The resulting model, therefore, reveals which correlations are direct and which are indirect. Pathways in these networks may provide mechanistic insights into important phenomena, such as allosteric regulation. Our models can also be used to investigate motions that are localized to specific regions of the system.
Finally, we note that because our first algorithm produces a regularized estimate of the true covariance matrix, Σ, it could potentially be used as a pre-processing step for PCA-based methods, which normally take as input the sample covariance matrix.
Conclusions and future work
We have introduced three novel methods for analyzing Molecular Dynamics simulation data. Our algorithms learn regularized graphical models of the data which can then be used to: (i) investigate the networks of correlations in the data; (ii) sample novel configurations; or (iii) perform in silico perturbation studies. We note that our methods are complementary to existing analysis techniques, and are not intended to replace them.
There are a number of important areas for future research. Gaussian Graphical Models have a number of limitations, most notably that they encode uni-modal distributions and are best suited to modeling harmonic motions. Boltzmann distributions, in contrast, are usually multi-modal. Our third algorithm partially addresses this problem by creating a Markov chain over GGMs but the motions are still harmonic. Discrete distributions could be used to model anharmonic motions (e.g., by adapting the algorithm in [24]). Gaussian distributions are also best suited to modeling variables defined on the real-line. Angular variables, naturally, are best modeled with circular distributions, like the von Mises. We've recently developed an algorithm for learning multivariate von Mises graphical models [25] which could be used to model distributions over bond and dihedral angles.
Abbreviations
- GGM: Gaussian Graphical Model
- KL: Kullback-Leibler
- MAP: maximum a posteriori
- MD: molecular dynamics
- MRF: Markov Random Field
- MSM: Markov State Model
- PCA: Principal Components Analysis
- QHA: Quasi-Harmonic Analysis
References
Frauenfelder H, Petsko GA, Tsernoglou D: Temperature-dependent X-ray diffraction as a probe of protein structural dynamics. Nature. 1979, 280 (5723): 558-563. 10.1038/280558a0.
Frauenfelder H, Parak F, Young RD: Conformational substates in proteins. Annu Rev Biophys Biophys Chem. 1988, 17: 451-479. 10.1146/annurev.bb.17.060188.002315.
Henzler-Wildman K, Kern D: Dynamic personalities of proteins. Nature. 2007, 450: 964-972. 10.1038/nature06522.
Boehr DD, Nussinov R, Wright PE: The role of dynamic conformational ensembles in biomolecular recognition. Nat Chem Biol. 2009, 5 (11): 789-796. 10.1038/nchembio.232.
Fraser J, Clarkson M, Degnan S, Erion R, Kern D, Alber T: Hidden alternative structures of proline isomerase essential for catalysis. Nature. 2009, 462 (7273): 669-673. 10.1038/nature08615.
Eisenmesser EZ, Bosco DA, Akke M, Kern D: Enzyme dynamics during catalysis. Science. 2002, 295 (5559): 1520-1523. 10.1126/science.1066176.
Eisenmesser EZ, Millet O, Labeikovsky W, Korzhnev DM, Wolf-Watz M, Bosco DA, Skalicky JJ, Kay LE, Kern D: Intrinsic dynamics of an enzyme underlies catalysis. Nature. 2005, 438: 117-121. 10.1038/nature04105.
Leitner DM: Energy flow in proteins. Annu Rev Phys Chem. 2008, 59: 233-259. 10.1146/annurev.physchem.59.032607.093606.
Karplus M, McCammon JA: Molecular dynamics simulations of biomolecules. Nat Struct Biol. 2002, 9: 646-652. 10.1038/nsb0902-646.
Phillips JC, Braun R, Wang W, Gumbart J, Tajkhorshid E, Villa E, Chipot C, Skeel RD, Kale LV, Schulten K: Scalable molecular dynamics with NAMD. J Comput Chem. 2005, 26 (16): 1781-1802. 10.1002/jcc.20289.
Bowers KJ, Chow E, Xu H, Dror RO, Eastwood MP, Gregersen BA, Klepeis JL, Kolossvary I, Moraes MA, Sacerdoti FD, Salmon JK, Shan Y, Shaw DE: Scalable algorithms for molecular dynamics simulations on commodity clusters. SC '06: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing. 2006, New York, NY, USA: ACM, 84-96. [http://dx.doi.org/10.1145/1188455.1188544]
Pande VS, Baker I, Chapman J, Elmer SP, Khaliq S, Larson SM, Rhee YM, Shirts MR, Snow C, Sorin EJ, Zagrovic B: Atomistic protein folding simulations on the submillisecond time scale using worldwide distributed computing. Biopolymers. 2003, 68: 91-109. 10.1002/bip.10219.
Stone JE, Phillips JC, Freddolino PL, Hardy DJ, Trabuco LG, Schulten K: Accelerating molecular modeling applications with graphics processors. J Comput Chem. 2007, 28: 2618-2640. 10.1002/jcc.20829.
Shaw DE, Deneroff MM, Dror RO, Kuskin JS, Larson RH, Salmon JK, Young C, Batson B, Bowers KJ, Chao JC, Eastwood MP, Gagliardo J, Grossman JP, Ho CR, Ierardi DJ, Kolossváry I, Klepeis JL, Layman T, McLeavey C, Moraes MA, Mueller R, Priest EC, Shan Y, Spengler J, Theobald M, Towles B, Wang SC: Anton, a special-purpose machine for molecular dynamics simulation. SIGARCH Comput Archit News. 2007, 35: 1-12.
Shao J, Tanner S, Thompson N, Cheatham T: Clustering molecular dynamics trajectories: 1. Characterizing the performance of different clustering algorithms. J Chem Theory Comput. 2007, 3 (6): 2312-2334. 10.1021/ct700119m.
Frickenhaus S, Kannan S, Zacharias M: Efficient evaluation of sampling quality of molecular dynamics simulations by clustering of dihedral torsion angles and Sammon mapping. J Comput Chem. 2009, 30 (3): 479-492. 10.1002/jcc.21076.
Daura X, van Gunsteren WF, Mark AE: Folding-unfolding thermodynamics of a beta-heptapeptide from equilibrium simulations. Proteins. 1999, 34 (3): 269-280. 10.1002/(SICI)1097-0134(19990215)34:3<269::AID-PROT1>3.0.CO;2-3.
Karplus M, Kushick JN: Method for estimating the configurational entropy of macro-molecules. Macromolecules. 1981, 14 (2): 325-332. 10.1021/ma50003a019.
Levy RM, Srinivasan AR, Olson WK, McCammon JA: Quasi-harmonic method for studying very low frequency modes in proteins. Biopolymers. 1984, 23: 1099-1112. 10.1002/bip.360230610.
Berendsen HJ, Hayward S: Collective protein dynamics in relation to function. Curr Opin Struct Biol. 2000, 10 (2): 165-169. 10.1016/S0959-440X(00)00061-0.
Ramanathan A, Agarwal PK, Kurnikova M, Langmead CJ: An online approach for mining collective behaviors from molecular dynamics simulations. J Comput Biol. 2010, 17 (3): 309-324. 10.1089/cmb.2009.0167.
Ramanathan A, Yoo J, Langmead C: On-the-fly identification of conformational sub-states from molecular dynamics simulations. J Chem Theory Comput. 2011, 7 (3): 778-789. 10.1021/ct100531j.
Lange OF, Grubmüller H: Full correlation analysis of conformational protein dynamics. Proteins. 2008, 70 (4): 1294-1312. 10.1002/prot.21618.
Balakrishnan S, Kamisetty H, Carbonell JG, Lee SI, Langmead CJ: Learning generative models for protein fold families. Proteins. 2011, 79 (4): 1061-1078. 10.1002/prot.22934.
Razavian N, Kamisetty H, Langmead C: The von Mises graphical model: regularized structure and parameter learning. Tech Rep CMU-CS-11-108, Carnegie Mellon University, Department of Computer Science. 2011
Bowman GR, Beauchamp KA, Boxer G, Pande VS: Progress and challenges in the automated construction of Markov state models for full protein systems. J Chem Phys. 2009, 131 (12): 124101-10.1063/1.3216567.
Banerjee O, El Ghaoui L, d'Aspremont A: Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J Mach Learn Res. 2008, 9: 485-516.
Vandenberghe L, Boyd S, Wu SP: Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications. 1998, 19: 499-533. 10.1137/S0895479896303430.
Kamisetty H, Xing EP, Langmead CJ: Free energy estimates of all-atom protein structures using generalized belief propagation. J Comput Biol. 2008, 15 (7): 755-766. 10.1089/cmb.2007.0131.
Kamisetty H, Ramanathan A, Bailey-Kellogg C, Langmead C: Accounting for conformational entropy in predicting binding free energies of protein-protein interactions. Proteins. 2011, 79 (2): 444-462. 10.1002/prot.22894.
Phillips JC, Braun R, Wang W, Gumbart J, Tajkhorshid E, Villa E, Chipot C, Skeel RD, Kalé L, Schulten K: Scalable molecular dynamics with NAMD. J Comput Chem. 2005, 26: 1781-1802. 10.1002/jcc.20289.
Jacobson JM, Kuritzkes DR, Godofsky E, DeJesus E, Larson JA, Weinheimer SP, Lewis ST: Safety, pharmacokinetics, and antiretroviral activity of multiple doses of ibalizumab (formerly TNX-355), an anti-CD4 monoclonal antibody, in human immunodeficiency virus type 1-infected adults. Antimicrob Agents Chemother. 2009, 53 (2): 450-457. 10.1128/AAC.00942-08.
Toma J, Weinheimer SP, Stawiski E, Whitcomb JM, Lewis ST, Petropoulos CJ, Huang W: Loss of asparagine-linked glycosylation sites in variable region 5 of human immunodeficiency virus type 1 envelope is associated with resistance to CD4 antibody ibalizumab. J Virol. 2011, 85 (8): 3872-3880. 10.1128/JVI.02237-10.
Gehring W, Affolter M, Burglin T: Homeodomain proteins. Annu Rev Biochem. 1994, 63: 487-526. 10.1146/annurev.bi.63.070194.002415.
D'Elia AV, Tell G, Paron I, Pellizzari L, Lonigro R, Damante G: Missense mutations of human homeoboxes: a review. Hum Mutat. 2001, 18: 361-374. 10.1002/humu.1207.
Gehring W, Qian Y, Billeter M, Furukubotokunaga K, Schier A, Resendezperez D, Affolter M, Otting G, Wuthrich K: Homeodomain-DNA recognition. Cell. 1994, 78: 211-223. 10.1016/0092-8674(94)90292-5.
Mayor U, Grossmann JG, Foster NW, Freund SM, Fersht AR: The denatured state of engrailed homeodomain under denaturing and native conditions. J Mol Biol. 2003, 333: 977-991. 10.1016/j.jmb.2003.08.062.
Mayor U, Johnson CM, Daggett V, Fersht AR: Protein folding and unfolding in microseconds to nanoseconds by experiment and simulation. Proc Natl Acad Sci U S A. 2000, 97: 13518-13522. 10.1073/pnas.250473497.
Acknowledgements
This work is supported in part by US NSF grant IIS-0905193. Use of the Anton machine was provided through an allocation from National Resource for Biomedical Supercomputing at the Pittsburgh Supercomputing Center via US NIH RC2GM093307.
This article has been published as part of BMC Genomics Volume 13 Supplement 1, 2012: Selected articles from the Tenth Asia Pacific Bioinformatics Conference (APBC 2012). The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2164/13?issue=S1.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All three authors contributed to the creation and implementation of the algorithms and writing the manuscript. N.S.R. and C.J.L. performed the experiments and analysis.
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Razavian, N.S., Kamisetty, H. & Langmead, C.J. Learning generative models of molecular dynamics. BMC Genomics 13 (Suppl 1), S5 (2012). https://doi.org/10.1186/1471-2164-13-S1-S5