Identifying potential associations on gene-disease networks via dual hypergraph regularized least squares

Background Identifying potential associations between genes and diseases through biomedical experiments is time-consuming and expensive. Computational methods based on machine learning models have therefore been widely used to explore genetic information related to complex diseases. Importantly, gene-disease association detection can be formulated as a link prediction problem on a bipartite network. However, many existing methods do not utilize multiple sources of biological information, nor do they extract higher-order relationships among genes and diseases. Results In this study, we propose a novel method called Dual Hypergraph Regularized Least Squares (DHRLS) with Centered Kernel Alignment-based Multiple Kernel Learning (CKA-MKL) to detect potential gene-disease associations. First, we construct multiple kernels based on various biological data sources in the gene and disease spaces, respectively. We then use CKA-MKL to obtain an optimal combined kernel in each space. In addition, hypergraphs are employed to establish higher-order relationships. Finally, the DHRLS model is solved by an Alternating Least Squares Algorithm (ALSA) to predict gene-disease associations. Conclusion Compared with several outstanding prediction tools, DHRLS achieves the best performance on the gene-disease association network under two types of cross validation. To verify its robustness, we also show that the proposed approach has excellent prediction performance on six real-world networks. Our work can effectively discover potential disease-associated genes and provide guidance for the follow-up experimental verification of complex diseases.


Background
Identification of associations between diseases and human genes has attracted increasing attention in the field of biomedicine and has become an important research topic. A great deal of evidence shows that understanding the genes related to a disease is of great help in preventing and treating it. However, identifying disease-gene relationships through biological experiments requires considerable time and cost. Many computational models have been proposed to solve similar biologically related problems. For example, in the fields of biology [1][2][3], pharmacy [4], and medicine [5,6], machine learning methods help solve many analytical tasks. To explore relationships between genes and diseases, a variety of algorithms have been proposed for association prediction. Typical machine learning methods [7][8][9][10] extract relevant features from the known genes of each disease and train a model to determine which diseases are related to those genes; these algorithms are usually single-task models that must be trained separately for each disease. Therefore, for a new disease, or an existing disease with few known genes, the lack of known association data and of relevant information shared across diseases makes it difficult to train such a learning model.

*Correspondence: wuxi_dyj@163.com; guofeieileen@163.com. † Hongpeng Yang and Jijun Tang contributed equally to this work. 2 Yangtze Delta Region Institute, University of Electronic Science and Technology of China, Quzhou, China. 4 School of Computer Science and Engineering, Central South University, Changsha, China. Full list of author information is available at the end of the article.
As machine learning methods, matrix completion approaches [11][12][13] can address the above problem by calculating similarity information and predicting disease-gene associations, but matrix completion usually takes a long time to converge to a local optimal solution. Another type is the network-based model [14][15][16][17]. Li et al. [17] predicted associations by systematically embedding a heterogeneous network of genes and diseases into a Graph Convolutional Network. Such models usually divide genes and diseases into two similarity networks, in which edges represent the similarity between nodes, and rest on the assumption that highly similar genes are likely to be related to similar diseases. However, they are biased by the network topology and rely on effective similarity information, and it is not easy for these methods to integrate multiple related sources for genes and diseases.
Multiple Kernel Learning (MKL) is an important machine learning method that can effectively combine multi-source information to improve model performance, and it has been applied to many biological problems. For instance, Yu et al. [8] implemented a one-class Support Vector Machine while optimizing the linear combination of gene kernels with the MKL method. Ding et al. [18][19][20][21] proposed multiple information fusion models to identify drug-target and drug-side effect associations. Wang et al. [22] proposed a novel Multiple Kernel Support Vector Machine (MKSVM) classifier based on the Hilbert-Schmidt Independence Criterion to identify membrane proteins. Shen [23] and Ding et al. [24] proposed MKSVM models to identify multi-label protein subcellular localization. Ding et al. also employed fuzzy-based models to predict DNA-binding proteins [25] and protein crystallization [26]. Zhang et al. [27] developed an ensemble predictive model of classifier chains to identify anti-inflammatory peptides.
The LapRLS framework [28] is often used in machine-learning-based models across various fields, such as the prediction of human microbe-disease associations [29] and the detection of human microRNA-disease associations [30]. At the same time, hypergraph learning [31][32][33] is becoming popular. Hypergraphs can represent more complex relationships among various objects. Bai et al. [34] introduced two end-to-end trainable operators to the family of graph neural networks, i.e., hypergraph convolution and hypergraph attention. Whilst hypergraph convolution defines the basic formulation of performing convolution on a hypergraph, hypergraph attention further enhances the capacity of representation learning by leveraging an attention module. Zhang et al. [35] developed a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. Ding et al. [36] predicted miRNA-disease associations by a hypergraph regularized bipartite local model, which is based on a hypergraph embedded Laplacian support vector machine.
Inspired by the above, we propose a novel prediction method named Dual Hypergraph Regularized Least Squares (DHRLS) to predict gene-disease associations. Computational models based on graph learning can effectively solve various network problems, and in this paper gene-disease association detection is defined as a link prediction problem on a bipartite network [37][38][39]. Furthermore, two feature spaces are described by multiple gene and disease similarity sources, and multiple kernel learning is used to combine this information linearly. Here, we use Centered Kernel Alignment-based Multiple Kernel Learning (CKA-MKL) [40] to obtain the weights of multiple kernels and then combine these kernels via the optimal weights in the two spaces, respectively. In addition, we embed hypergraphs in the graph regularization terms to preserve higher-order information of genes and diseases, using this richer information to improve prediction performance. To prove the effectiveness of our proposed method, six types of real networks and one gene-disease association network are employed to test our predictive model. On the gene-disease association dataset, our method is compared with other methods under two types of cross-validation (CV). Compared with state-of-the-art methods for predicting gene-disease associations, including CMF, GRMF and Spa-LapRLS, our model achieves the highest AUC and AUPR in 10-fold cross validation under CV1, though it achieves a lower AUC under CV2 than Spa-LapRLS. At the same time, DHRLS shows excellent prediction performance on six benchmark datasets.

Results
To test the performance of our method, we verify the proposed approach on a real gene-disease association dataset under two types of cross validation. We also test the capability of DHRLS in predicting novel diseases after confirming the excellent performance of our method under cross validation. Furthermore, we employ benchmark datasets to evaluate our approach and compare it with other existing methods.

Dataset
We download the dataset of gene-disease associations from [41] (http://cssb2.biology.gatech.edu/knowgene). Since the number of genes is too large and the information

Evaluation measurements
The 10-fold Cross Validation (CV) is usually used to verify bipartite network prediction. To compare prediction performance with other methods under the same evaluation measurement, we also use 10-fold CV for verification. The Area Under the receiver operating characteristic Curve (AUC) and the Area Under the Precision-Recall curve (AUPR) are applied as the major evaluation indicators. There are two CV settings:
CV1: pair prediction. All gene-disease associations are randomly divided into a test set and a training set, and the associations in the test set are removed.
CV2: disease prediction. All associations of each test disease are removed from the training set, simulating a new disease.
Fig. 2 The AUC (a) and AUPR (b) of models with different λ_d and λ_g under CV1. λ_d (horizontal axis) and λ_g (vertical axis) are set from 2^-5 to 2^5 with step 2^1. Yellow indicates higher values; blue indicates lower values
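To illustrate the CV1 setting, the following minimal numpy sketch (all names hypothetical) splits every gene-disease pair into 10 folds and hides one fold's associations during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv1_folds(Y, n_folds=10):
    """CV1 (pair prediction): split all (disease, gene) pairs into folds.

    Returns boolean masks over Y's entries; the pairs in a test mask are
    hidden (set to 0) during training.
    """
    pairs = np.argwhere(np.ones(Y.shape, dtype=bool))  # every (i, j) pair
    rng.shuffle(pairs)
    masks = []
    for fold in np.array_split(pairs, n_folds):
        mask = np.zeros(Y.shape, dtype=bool)
        mask[fold[:, 0], fold[:, 1]] = True
        masks.append(mask)
    return masks

Y = (rng.random((6, 8)) < 0.3).astype(float)  # toy association matrix
masks = cv1_folds(Y)
Y_train = Y * ~masks[0]   # fold 0 is the test set; its entries are removed
```

In the CV2 setting, entire rows (diseases) would be masked instead of individual pairs.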

Parameter settings
In our study, DHRLS has parameters λ_d, λ_g, β, k, and the number of iterations. In the parameter selection, we consider all combinations of the following values: the number of k-Nearest Neighbors from 10 to 100 (with step 10); the number of iterations in {1, 2, ..., 15}; λ_d and λ_g in {2^-5, ..., 2^0, ..., 2^5}; and β = 1. Figure 1 shows the results of our model under different iteration counts and k values. For the number of k-Nearest Neighbors, we select the optimal k under the highest AUPR value and can clearly see that AUPR peaks at k = 50. The model basically converges after four iterations; to train it more fully, we finally set the number of iterations to 10. Figure 2 shows the AUC and AUPR results of the grid search over λ_d and λ_g. The optimal λ_d and λ_g are also selected under the highest AUPR value. In this study, the optimal parameters of the hypergraph Laplacian regularization terms are λ_d = 1 and λ_g = 0.25; under this setting, the AUC value is also relatively high.
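The parameter selection described above amounts to an exhaustive grid search. A toy sketch, with a stand-in `evaluate` function in place of actually training DHRLS and computing AUPR (the surrogate simply peaks at the optimum reported in this section):

```python
from itertools import product

# Hypothetical surrogate for "train DHRLS at this setting and return AUPR";
# it peaks at (lam_d, lam_g, k) = (1, 0.25, 50) by construction.
def evaluate(lam_d, lam_g, k):
    return -(lam_d - 1.0) ** 2 - (lam_g - 0.25) ** 2 - (k - 50) ** 2 / 1e4

lams = [2.0 ** p for p in range(-5, 6)]      # 2^-5 ... 2^5
ks = range(10, 101, 10)                      # k-Nearest Neighbor grid
best = max(product(lams, lams, ks), key=lambda t: evaluate(*t))
# best == (1.0, 0.25, 50): lambda_d = 2^0, lambda_g = 2^-2, k = 50
```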

Evaluation on gene-disease association data Performance analysis
We evaluate the performance of CKA-MKL, mean-weighted MKL and single-kernel models on DHRLS. The testing results are shown in Table 1 and Fig. 3.
Obviously, CKA-MKL on DHRLS obtains the best performance, with an AUC of 0.9742 and an AUPR of 0.8092. Compared with the mean-weighted combination on DHRLS, AUPR and AUC are increased by 0.0086 and 0.0039, respectively. This means that CKA combines multi-kernel information more effectively than a simple average combination.
Models built on the other single kernels obtain lower performance than the model with the GIP kernel. Therefore, GIP is an effective method for calculating the kernel matrix. Comparing the results of single-kernel and multi-kernel models shows that combining multiple sources of information is an effective way to improve prediction. Furthermore, Fig. 4 shows the weights of each kernel matrix in the gene and disease spaces. The weight of a kernel indicates the contribution of the corresponding kernel matrix. In both spaces, the GIP kernel has the highest weight, which is consistent with the results in Table 1. In the gene space, apart from the GIP kernel, the weight of K^g_GO is higher than those of K^g_PPI and K^g_SW, meaning that K^g_GO contributes more to the combined kernel than the other two kernel matrices.

Comparison to existing predictors
Many excellent methods have been proposed to predict links in bipartite networks, including Spa-LapRLS [30], GRMF [42] and CMF [43]. Our method is compared with these existing methods and with DGRLS under CV1 and CV2, respectively. Under CV1, the results are shown in Table 2 and Fig. 5. Our method achieves the best AUC (0.9742) and AUPR (0.8092). For AUC, DHRLS differs little from DGRLS and Spa-LapRLS, and is about 0.01 higher than GRMF and CMF. As for AUPR, DHRLS achieves better performance than the other methods. Comparing the results of DHRLS and DGRLS, the hypergraph-based model outperforms the ordinary graph model, which shows that the higher-order graph information constructed by the hypergraph helps prediction performance; this is related to the ability of hypergraphs to effectively capture similarity among nodes. At the same time, the LapRLS-based methods (DHRLS, DGRLS and Spa-LapRLS) perform better than the matrix-factorization-based methods (GRMF and CMF), indicating that the LapRLS framework is more advantageous for predicting gene-disease associations.
To test the performance of our method on detecting new diseases, the associations of test diseases (CV2) are not observed in the training set. Table 3 and Fig. 6 show the results of CV2. Under CV2, our method obtains the best AUPR (0.1413), while its AUC (0.8987) is second best, about 0.02 lower than that of Spa-LapRLS. Comparing the results of DGRLS and DHRLS under CV1 and CV2, we clearly find that utilizing hypergraphs to establish higher-order relationships greatly improves the predictive ability of the model.

Case study
Our model can predict genes associated with new diseases. Here, we use DHRLS to rank the predicted scores of genes related to a new disease in descending order; the higher the ranking, the more likely the association. We set all values of a disease in the association matrix to 0 to treat it as a new disease. One example is Lung Diseases: among the top 50 predicted genes, 40 (80%) are known related genes. All predicted ranking results are shown in Table 4.

Evaluation on six benchmark datasets
To test the performance of our proposed method, we consider six real-world networks: (i) G-protein coupled receptors (GPC Receptors): the biological network of drugs binding GPC receptors; (ii) Ion channels: the biological network of drugs binding ion channel proteins; (iii) Enzymes: the biological network of drugs binding enzyme proteins; (iv) Southern Women (referred to here as "SW"): the social relations network of women and events; (v) Drug-target: the chemical network of drug-target interactions; (vi) Country-organization (referred to here as "CO"): the network of organizations most related to each country. Detailed information about the six datasets is described in Table 5.
Since only the interaction matrix of each binary network is available, in order not to introduce additional data, we directly use the GIP kernel extracted from the interaction matrix as the kernel matrix for each real-world network. The kernel is defined as follows:

K_GIP(i, j) = exp(−γ ||Y_i − Y_j||^2)

where Y is the training set of the binary network and Y_i is the association vector of node i. We test our method on the above six datasets and compare results with other methods [44]. Wang et al. [44] proposed a framework called Similarity Regularized Nonnegative Matrix Factorization (SRNMF) for link prediction in bipartite networks, combining the similarity-based structure and the latent feature model from a new perspective. Tables 6 and 7 show the comparison of precision and AUC on the six real-world networks. DHRLS performs better than the other methods on the Enzymes and Ion channel networks, with higher precision and AUC. For the GPC Receptors and Drug-target networks, the precision is the same, but our AUC is slightly higher. These results indicate that our approach is competitive on real-world binary networks.
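The GIP kernel used here can be computed from the interaction matrix alone. A minimal numpy sketch, assuming the form K(i, j) = exp(−γ ||Y_i − Y_j||²) with the bandwidth γ set as in this paper:

```python
import numpy as np

def gip_kernel(Y, gamma=0.5):
    """GIP kernel: K(i, j) = exp(-gamma * ||Y_i - Y_j||^2).

    Row i of Y is the association profile of node i; gamma is the
    bandwidth (set to 0.5 in the paper).
    """
    sq = (Y ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T   # pairwise sq. distances
    return np.exp(-gamma * np.maximum(d2, 0.0))

Y = np.array([[1., 0., 1.],
              [1., 0., 1.],
              [0., 1., 0.]])
K = gip_kernel(Y)  # identical profiles (rows 0 and 1) give K[0, 1] = 1
```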

Discussion
We developed the DHRLS model for gene-disease association prediction. To evaluate it, we test not only on a real gene-disease association dataset but also on several benchmark datasets. Comparing single-kernel and multi-kernel models shows that MKL can effectively combine multi-kernel information to improve predictive ability: by adjusting the kernel weights, different kernel matrices can express different levels of information. However, MKL must be applied to samples with multiple sources of feature information, and its benefit is less obvious for problems with few features. The comparison of DHRLS and DGRLS illustrates the effectiveness of the hypergraph: after adding the hypergraph, the results clearly improve, which stems from the hypergraph's characteristics. A hypergraph uses higher-order information between nodes, that is, a hyperedge can connect more than two nodes, which better indicates the degree of similarity among nodes. Comparing DHRLS with other state-of-the-art methods for predicting gene-disease associations, including CMF, GRMF and Spa-LapRLS, our model achieves the highest AUC and AUPR in 10-fold cross validation under CV1, but a lower AUC under CV2 compared with Spa-LapRLS. At the same time, DHRLS shows excellent prediction performance on six benchmark datasets. Nevertheless, our model still has some flaws. First, it contains a large number of matrix operations and optimization problems and lacks a certain degree of simplicity. Second, we need to calculate multi-kernel information for each sample, so we cannot make predictions for samples without features. At present, many computational methods are being developed to predict gene-disease associations, and there is still great room to improve prediction performance; for example, hypergraphs could be incorporated into other graph-based methods.
In the future, to optimize the model and improve prediction performance, we can add data preprocessing and simplify the calculations on the basis of DHRLS, as well as develop better methods to build the hypergraph.

Conclusion
In summary, we propose a Dual Hypergraph Regularized Least Squares (DHRLS) model based on the CKA-MKL algorithm for gene-disease association prediction. We use multiple kernels to describe the gene and disease spaces; the weights of these kernels are obtained by CKA-MKL and used to combine them. We use hypergraphs to describe more complex relationships and improve our predictions. Our purpose is to establish an accurate and effective prediction model of gene-disease associations based on the existing association data, and to provide guidance for the follow-up verification of complex diseases.

Methods
In this study, we first use two disease kernels and four gene kernels to reveal potential associations between genes and diseases. Then, the CKA-MKL method is used to combine the above kernels into one disease kernel and one gene kernel. Finally, we use Dual Hypergraph Regularized Least Squares to identify gene-disease associations. Figure 7 shows the flowchart of our method DHRLS.
Note to Table 5: |V| and |W| denote the numbers of the two types of nodes, respectively; |E| is the number of edges; LD, AD, LAD, and RAD are the link density, the average degree, the left average degree, and the right average degree.

Problem definition
The prediction of gene-disease associations can be regarded as a recommendation problem. Given n diseases D = {d_1, d_2, ..., d_n}, m genes S = {g_1, g_2, ..., g_m}, and known gene-disease associations, the associations can be expressed as an adjacency matrix Y ∈ R^{n×m}. If disease d_i (1 ≤ i ≤ n) is associated with gene g_j (1 ≤ j ≤ m), the value of Y_{i,j} is set to 1; otherwise it is 0. Genes, diseases, and their associations thus form a bipartite network.
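The adjacency matrix described above can be built directly from a list of known associations. A toy sketch (all names hypothetical):

```python
import numpy as np

diseases = ["d1", "d2", "d3"]                       # n = 3
genes = ["g1", "g2", "g3", "g4"]                    # m = 4
known = [("d1", "g2"), ("d3", "g1"), ("d3", "g4")]  # known associations

d_idx = {d: i for i, d in enumerate(diseases)}
g_idx = {g: j for j, g in enumerate(genes)}

Y = np.zeros((len(diseases), len(genes)))           # Y in R^{n x m}
for d, g in known:
    Y[d_idx[d], g_idx[g]] = 1.0   # Y[i, j] = 1 iff d_i associated with g_j
```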

Related work
The Laplacian Regularized Least Squares (LapRLS) model [45] based on graph regularization is employed to predict potential associations in a bipartite network. In each feature space, the objective function of the model can be defined as follows:

min_{α_a} ||Y_train − K_a α_a||_F^2 + λ_a tr((K_a α_a)^T L_a (K_a α_a))
min_{α_b} ||Y_train^T − K_b α_b||_F^2 + λ_b tr((K_b α_b)^T L_b (K_b α_b))

where Y_train ∈ R^{n×m}, and K_a ∈ R^{n×n} and K_b ∈ R^{m×m} are the kernels in the two feature spaces, respectively.
L_a ∈ R^{n×n} and L_b ∈ R^{m×m} are the normalized Laplacian matrices:

L_a = D_a^{−1/2} (D_a − K_a) D_a^{−1/2}, L_b = D_b^{−1/2} (D_b − K_b) D_b^{−1/2}

where D_a and D_b are diagonal degree matrices with D_a(k, k) = Σ_{l=1}^{n} K_a(k, l) and D_b(k, k) = Σ_{l=1}^{m} K_b(k, l). The variables α_a* and α_b* of LapRLS can be solved in closed form:

α_a* = (K_a + λ_a L_a K_a)^{−1} Y_train, α_b* = (K_b + λ_b L_b K_b)^{−1} Y_train^T

and F_a* and F_b* are calculated as F_a* = K_a α_a* and F_b* = K_b α_b*. The predictions from the two feature spaces are combined into:

F* = (F_a* + (F_b*)^T) / 2
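For illustration, here is a minimal numpy sketch of LapRLS in one feature space, assuming the standard closed form α* = (K + λ L K)^{-1} Y_train (a small ridge term is added for numerical stability; all names are hypothetical):

```python
import numpy as np

def normalized_laplacian(K):
    """L = D^{-1/2} (D - K) D^{-1/2}, with D the diagonal degree matrix."""
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    return np.eye(len(K)) - d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]

def laprls_predict(K, Y, lam):
    """Closed-form LapRLS in one space: alpha = (K + lam * L K)^{-1} Y."""
    L = normalized_laplacian(K)
    A = K + lam * L @ K + 1e-8 * np.eye(len(K))  # small ridge for stability
    alpha = np.linalg.solve(A, Y)
    return K @ alpha  # F* = K alpha

rng = np.random.default_rng(1)
X = rng.random((5, 3))
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
Y = (rng.random((5, 4)) < 0.4).astype(float)
F = laprls_predict(K, Y, lam=0.1)
```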

Feature extraction
To improve the effectiveness of detecting gene-disease associations, we use two and four types of similarity for diseases and genes, respectively. In our work, we construct multiple kernels of diseases and genes to represent the feature sets. Table 8 summarizes all kernels in the two feature spaces.

Disease space
We calculate two classes of disease kernels, including semantic similarity kernel and Gaussian Interaction Profile (GIP) kernel (for disease).

a) Semantic similarity
The disease semantic similarity kernel is calculated from the relative positions of diseases in the MeSH [46] hierarchy. A disease d_i can be described as a node in a Directed Acyclic Graph (DAG) [47], denoted DAG(d_i) = (d_i, T_{d_i}, E_{d_i}), where T_{d_i} is the set of all ancestor nodes of d_i (including d_i itself) and E_{d_i} is the set of corresponding links. The semantic score of each disease t ∈ T_{d_i} can be calculated as follows:

D_{d_i}(t) = 1, if t = d_i
D_{d_i}(t) = max{Δ · D_{d_i}(t') : t' ∈ children of t}, if t ≠ d_i

where Δ is the semantic contribution factor, set to 0.5 in this paper. Then, the overall semantic value of disease d_i can be calculated as follows:

DV(d_i) = Σ_{t ∈ T_{d_i}} D_{d_i}(t)

So, the disease semantic similarity kernel K^d_SEM ∈ R^{n×n} is calculated as follows:

K^d_SEM(d_i, d_j) = Σ_{t ∈ T_{d_i} ∩ T_{d_j}} (D_{d_i}(t) + D_{d_j}(t)) / (DV(d_i) + DV(d_j))

b) GIP kernel similarity
The similarity between diseases can also be calculated by GIP. Given two diseases d_i and d_j (i, j = 1, 2, ..., n), the GIP kernel can be calculated as follows:

K^d_GIP(d_i, d_j) = exp(−γ_d ||Y_{d_i} − Y_{d_j}||^2)

where Y_{d_i} and Y_{d_j} are the association profile vectors of diseases d_i and d_j, and γ_d (set to 0.5) is the bandwidth of the GIP kernel.
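The semantic-score propagation over the MeSH DAG can be sketched as follows (a toy DAG with hypothetical helper names; Δ is the semantic contribution factor):

```python
# `parents` maps each disease term to its parent terms in the DAG;
# delta is the semantic contribution factor (0.5 in the paper).
def semantic_scores(disease, parents, delta=0.5):
    """D(t) = 1 for the disease itself; otherwise delta times the best
    score among t's children on a path back to the disease."""
    scores = {disease: 1.0}
    frontier = [disease]
    while frontier:
        node = frontier.pop()
        for p in parents.get(node, []):
            cand = delta * scores[node]
            if cand > scores.get(p, 0.0):
                scores[p] = cand
                frontier.append(p)
    return scores

def sem_sim(s1, s2):
    """Shared ancestor scores over the total semantic values DV."""
    shared = set(s1) & set(s2)
    return sum(s1[t] + s2[t] for t in shared) / (sum(s1.values()) + sum(s2.values()))

parents = {"asthma": ["lung disease"], "lung disease": ["disease"]}
s = semantic_scores("asthma", parents)
# s == {"asthma": 1.0, "lung disease": 0.5, "disease": 0.25}
```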

Gene space
Four types of gene kernels, including Gene Ontology (GO) [48] similarity, Protein-protein interactions (PPIs) network similarity, sequence similarity kernel and GIP kernel (for gene) are utilized to represent the relationship between genes.
a) GO similarity The information of GO is obtained through DAVID [49]. The GO similarity K^g_GO ∈ R^{m×m} measures the overlap of GO annotations between two genes, and we use GOSemSim [50] to compute it. We consider one aspect of GO, cellular component (CC), to represent gene functional annotation.

b) PPIs similarity
We download the protein-protein interaction network from previous research [41] and select the sub-networks related to our genes. Given the topological feature vectors p_i and p_j of two genes in the PPI network, the cosine similarity kernel is calculated as follows:

K^g_PPI(g_i, g_j) = (p_i · p_j) / (||p_i|| ||p_j||)

c) Sequence similarity
We use the normalized Smith-Waterman (SW) score [51] to measure the similarity between two gene sequences, calculated as follows:

K^g_SW(g_i, g_j) = SW(S_{g_i}, S_{g_j}) / sqrt(SW(S_{g_i}, S_{g_i}) · SW(S_{g_j}, S_{g_j}))

where SW(·, ·) is the Smith-Waterman score and S_{g_i} is the sequence of gene g_i.

d) GIP kernel similarity
GIP is also employed to build the gene GIP kernel K^g_GIP. Given two genes g_i and g_j (i, j = 1, 2, ..., m), the GIP kernel can be calculated as follows:

K^g_GIP(g_i, g_j) = exp(−γ_g ||Y_{g_i} − Y_{g_j}||^2)

where Y_{g_i} and Y_{g_j} are the association profile vectors of genes g_i and g_j, and γ_g (set to 0.5) is the bandwidth of the GIP kernel.
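The cosine-similarity kernel over PPI topological vectors can be sketched in a few lines of numpy (toy vectors, hypothetical names):

```python
import numpy as np

def cosine_kernel(P):
    """Cosine similarity between rows of P (e.g. PPI topological vectors)."""
    Pn = P / np.clip(np.linalg.norm(P, axis=1, keepdims=True), 1e-12, None)
    return Pn @ Pn.T

P = np.array([[1., 0., 2.],    # g1
              [2., 0., 4.],    # g2: parallel to g1 -> similarity 1
              [0., 3., 0.]])   # g3: orthogonal to g1 -> similarity 0
K = cosine_kernel(P)
```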

Multiple kernel learning
In our work, there are two kernels in the disease space, K^d_SEM and K^d_GIP, and four kernels in the gene space, K^g_SW, K^g_GO, K^g_PPI and K^g_GIP. We then combine these kernels linearly in each space to obtain the optimal one:
K_ω = Σ_{i=1}^{k} ω_i K_i, with ω_i ≥ 0 and Σ_{i=1}^{k} ω_i = 1 (14)

where k is the number of kernels and ω_i is the weight of kernel K_i ∈ R^{N×N}, with N the number of samples.
The method CKA-MKL is utilized to combine the gene kernels and the disease kernels, respectively. The cosine similarity between two kernels K_1 and K_2 is defined as follows:

A(K_1, K_2) = <K_1, K_2>_F / (||K_1||_F ||K_2||_F) (15)

where K_1, K_2 ∈ R^{n×n}, <K_1, K_2>_F = Trace(K_1^T K_2) is the Frobenius inner product and ||K_1||_F = sqrt(<K_1, K_1>_F) is the Frobenius norm.
The higher the cosine value, the greater the similarity between the kernels. CKA is based on the assumption that the combined kernel (feature space) should be similar to the ideal kernel (label space); therefore, the alignment score between the combined kernel and the ideal kernel should be maximized. The objective function of centered kernel alignment is as follows:

max_ω <K^c_ω, K^c_ideal>_F / (||K^c_ω||_F ||K^c_ideal||_F) (16)

where K^c = H K H is a centered kernel matrix with centering matrix H = I_N − (1/N) l_N l_N^T, I_N ∈ R^{N×N} the N-order identity matrix, and l_N the N-order vector with all entries equal to one; K^c_ω is the centered kernel matrix associated with K_ω. Equation 16 can be rewritten as the quadratic program

min_{ω ≥ 0} ω^T M ω − 2 ω^T a (17)

where a_i = <K^c_i, K^c_ideal>_F and M denotes the matrix defined by M_ij = <K^c_i, K^c_j>_F, for i, j = 1, ..., k. We can obtain the weights ω by solving this simple quadratic programming problem.
CKA-MKL estimates the weights of the disease kernels (∈ R^{n×n}) and the gene kernels (∈ R^{m×m}) separately; k_d and k_g are the numbers of kernels in the disease and gene spaces. To obtain the optimal kernel matrices K*_d and K*_g in the two spaces, we first calculate the weights of the kernel matrices in each space by Eq. 17, and then combine them by Eq. 14. Here, K^d_ideal = Y_train Y_train^T ∈ R^{n×n} in the disease space, and K^g_ideal = Y_train^T Y_train ∈ R^{m×m} in the gene space.
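As an illustration, here is a numpy sketch of centered-kernel-alignment weight estimation. It uses the common closed form taking ω proportional to M^{-1}a (clipped to be nonnegative and normalized); the exact optimizer used in the paper may differ, so treat this as an assumption-laden sketch:

```python
import numpy as np

def center(K):
    """K^c = H K H with centering matrix H = I - (1/N) 11^T."""
    N = len(K)
    H = np.eye(N) - np.ones((N, N)) / N
    return H @ K @ H

def cka_mkl_weights(kernels, K_ideal):
    """Kernel weights from centered kernel alignment.

    a_i = <K_i^c, K_ideal^c>_F and M_ij = <K_i^c, K_j^c>_F; the weights
    are taken proportional to M^{-1} a (a common closed-form variant).
    """
    Kc = [center(K) for K in kernels]
    Ic = center(K_ideal)
    a = np.array([np.sum(K * Ic) for K in Kc])        # Frobenius products
    M = np.array([[np.sum(Ki * Kj) for Kj in Kc] for Ki in Kc])
    w = np.linalg.solve(M + 1e-8 * np.eye(len(kernels)), a)
    w = np.clip(w, 0.0, None)                         # enforce w >= 0
    return w / w.sum()

rng = np.random.default_rng(2)
Y = (rng.random((6, 4)) < 0.5).astype(float)
K_ideal = Y @ Y.T                       # ideal kernel from the labels
K_good = K_ideal + 0.01 * np.eye(6)     # nearly ideal kernel
R = rng.random((6, 6))
K_noise = R @ R.T                       # uninformative PSD kernel
w = cka_mkl_weights([K_good, K_noise], K_ideal)
# the informative kernel receives the larger weight
```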

Hypergraph learning
In graph theory, a graph represents the pairwise relationships within a group of objects. In traditional graph structures, vertices represent objects and edges represent relationships between two objects. However, traditional graph structures cannot express more complex relationships; for example, they cannot express relationships that involve more than two objects at once. Hypergraphs [31] solve this problem well. In hypergraph theory, such multi-object relationships are represented by using subsets of the vertex set as hyperedges. In this study, we use a hypergraph to establish these higher-order relationships. In Fig. 8 (left), {v_1, v_2, ..., v_7} is the vertex set, and {v_2, v_4, v_6} are contained in hyperedge e_1. Each hyperedge may comprise two or more vertices, and a hyperedge degenerates into a normal edge when it contains only two vertices.
The construction of a hypergraph is similar to that of an ordinary graph. A hypergraph also needs a vertex set V, a hyperedge set E and hyperedge weights w ∈ R^{N_e×1}, where each hyperedge e_i (i = 1, 2, ..., N_e) is given a weight w(e_i). The difference is that each hyperedge of a hypergraph is actually a set of vertices. Therefore, a hypergraph can be represented as G = (V, E, w).
For the hypergraph G, the incidence matrix H conveys the affinity between vertices and hyperedges. Each element of H is given by the following formula:

H(i, j) = 1, if v_i ∈ e_j; H(i, j) = 0, otherwise (18)

The matrix H describes the relationships between vertices and is shown in Fig. 8 (right). Specifically, H_{i,j} = 1 means that vertex v_i is included in hyperedge e_j; on the contrary, H_{i,j} = 0 means that v_i is not in e_j. In a hypergraph G, the degree of each vertex and of each hyperedge are expressed as follows:

d(v_i) = Σ_j w(e_j) H(i, j) (19)
δ(e_j) = Σ_i H(i, j) (20)

and the weight of each hyperedge is computed from the combined kernel K*, as described below.
The hypergraph is constructed using the k-Nearest Neighbor (kNN) algorithm. Specifically, each vertex is taken as a center point, and the k vertices with the largest similarity according to the kernel matrix are found to form a hyperedge together with the center. Assuming there are N samples, we can construct N hyperedges. In this study, we define the weight of each hyperedge as the sum of the kernel values of the k vertices closest to the center point, and the weights are finally normalized.
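The kNN-based hypergraph construction can be sketched as follows (a minimal numpy version of the procedure just described; names are hypothetical):

```python
import numpy as np

def knn_hypergraph(K, k):
    """Incidence matrix H and normalized hyperedge weights from kernel K.

    Each of the N vertices spawns one hyperedge containing itself and its
    k most similar vertices under K; each hyperedge weight is the sum of
    the kernel values of its members, normalized over all hyperedges.
    """
    N = len(K)
    H = np.zeros((N, N))               # rows: vertices, columns: hyperedges
    w = np.zeros(N)
    for e in range(N):
        members = np.argsort(-K[e])[: k + 1]   # center plus k neighbors
        H[members, e] = 1.0
        w[e] = K[e, members].sum()
    return H, w / w.sum()

rng = np.random.default_rng(3)
X = rng.random((7, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # toy kernel
H, w = knn_hypergraph(K, k=2)
```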
The hypergraph Laplacian matrix L^h [31] is defined as follows:

L^h = I − D_v^{−1/2} H W D_e^{−1} H^T D_v^{−1/2} (21)

where I is the identity matrix, W = diag(w) is the diagonal matrix of hyperedge weights, and D_v and D_e are the diagonal matrices of vertex and hyperedge degrees. Consequently, we can obtain the hypergraph Laplacian matrices L^h_d and L^h_g for the disease and gene spaces, respectively.
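A minimal numpy sketch of the hypergraph Laplacian in the normalized form of [31] (assuming the standard I − D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} construction):

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """L_h = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} (as in [31])."""
    d_v = H @ w                 # vertex degrees: weights of incident edges
    d_e = H.sum(axis=0)         # hyperedge degrees: number of member vertices
    Dv_is = np.diag(1.0 / np.sqrt(d_v))
    Theta = Dv_is @ H @ np.diag(w) @ np.diag(1.0 / d_e) @ H.T @ Dv_is
    return np.eye(H.shape[0]) - Theta

# toy hypergraph: e1 = {v1, v2}, e2 = {v2, v3, v4}
H = np.array([[1., 0.], [1., 1.], [0., 1.], [0., 1.]])
L = hypergraph_laplacian(H, w=np.array([0.5, 0.5]))
```

As a sanity check, L is symmetric and annihilates the vector D_v^{1/2}·1, mirroring the null space of an ordinary normalized graph Laplacian.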

Dual hypergraph regularized least squares
Based on the LapRLS method, we propose a novel model to predict gene-disease associations, named Dual Hypergraph Regularized Least Squares (DHRLS), by incorporating multiple sources of information from the gene and disease feature spaces into a dual hypergraph regularized least squares framework. The basic objective function can be written as follows:

min_{α_d, α_g} ||Y_train − K*_d α_d||_F^2 + ||Y_train^T − K*_g α_g||_F^2 (22)

where F*_d = K*_d α_d and F*_g = K*_g α_g. The final prediction F* = (F*_d + (F*_g)^T) / 2 is the average combination of the evaluations from the gene and disease spaces.
Then, to avoid overfitting of α_d and α_g to the training data, we apply L2 (Tikhonov) regularization to Eq. 22 by adding two terms regarding α_d and α_g:

min_{α_d, α_g} ||Y_train − K*_d α_d||_F^2 + ||Y_train^T − K*_g α_g||_F^2 + β (tr(α_d^T K*_d α_d) + tr(α_g^T K*_g α_g)) (23)
where β is a regularization coefficient. Since previous studies [52] have shown that graph regularization terms improve model predictions, graph regularization terms for genes and diseases are added to the model. According to the local invariance assumption [53], if two data points are close in the intrinsic geometry of the data distribution, then their representations with respect to the new basis are also close to each other; this assumption plays an essential role in many algorithms. In our model, we minimize the distance between the potential feature vectors of adjacent diseases and of adjacent genes, respectively:

(1/2) Σ_{i,r} K*_d(i, r) ||F*_{d,i} − F*_{d,r}||^2 = tr(α_d^T K*_d L_d K*_d α_d)
(1/2) Σ_{j,q} K*_g(j, q) ||F*_{g,j} − F*_{g,q}||^2 = tr(α_g^T K*_g L_g K*_g α_g) (24)

where F*_{d,i} is the i-th row vector of F*_d = K*_d α_d ∈ R^{n×m} (i, r = 1, 2, ..., n), and F*_{g,j} is the j-th row vector of F*_g = K*_g α_g ∈ R^{m×n} (j, q = 1, 2, ..., m). F*_{d,i} and F*_{g,j} are the representations in the new basis, and K*_d(i, r) and K*_g(j, q) are the weights of pairs of points in the two spaces. After adding the graph regularization terms, the objective function is redefined as follows:

min_{α_d, α_g} ||Y_train − K*_d α_d||_F^2 + ||Y_train^T − K*_g α_g||_F^2 + β (tr(α_d^T K*_d α_d) + tr(α_g^T K*_g α_g)) + λ_d tr(α_d^T K*_d L_d K*_d α_d) + λ_g tr(α_g^T K*_g L_g K*_g α_g) (25)

where λ_d and λ_g are the coefficients of the graph regularization terms.

Table 9 The algorithm of our proposed method
Input: known associations Y_train ∈ R^{n×m}, disease space kernels (K^d_SEM, K^d_GIP ∈ R^{n×n}) and gene space kernels (K^g_GO, K^g_PPI, K^g_SW, K^g_GIP ∈ R^{m×m}); parameters λ_d, λ_g, β and k (k-Nearest Neighbor) for DHRLS.
Output: predicted associations F* ∈ R^{n×m}.
1. Calculate the disease and gene kernels listed in Table 8;
2. Calculate the disease kernel weights w_d and gene kernel weights w_g by Eq. 17 (CKA-MKL), respectively;
3. Calculate K*_d and K*_g by Eq. 14, respectively;
4. Calculate L^h_d and L^h_g by Eq. 21, respectively;
5. Solve Eqs. 27 and 28 (ALSA), and estimate F* by Eq. 29.
We call the model of formula 25 Dual Graph Regularized Least Squares (DGRLS). In order to express higher-order relationships between nodes and further improve prediction, the hypergraph Laplacian matrix is applied to obtain our final model DHRLS. Thus, the final objective function can be described as follows:

min_{α_d, α_g} ||Y_train − K*_d α_d||_F^2 + ||Y_train^T − K*_g α_g||_F^2 + β (tr(α_d^T K*_d α_d) + tr(α_g^T K*_g α_g)) + λ_d tr(α_d^T K*_d L^h_d K*_d α_d) + λ_g tr(α_g^T K*_g L^h_g K*_g α_g) (26)

where L^h is the hypergraph Laplacian matrix, calculated by Eq. 21.

Objective function optimization for DHRLS
We select the Alternating Least Squares Algorithm (ALSA) to estimate α_d and α_g: setting the derivative of Eq. 26 with respect to each variable to zero yields the updates

α_d = (K*_d + β I + λ_d L^h_d K*_d)^{−1} Y_train (27)
α_g = (K*_g + β I + λ_g L^h_g K*_g)^{−1} Y_train^T (28)

which are applied alternately until convergence. The final prediction is

F* = (K*_d α_d + (K*_g α_g)^T) / 2 (29)
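For illustration, the two regularized least-squares solves and the averaged prediction can be sketched in numpy. This assumes ridge-style closed-form updates of the form α = (K + βI + λ L^h K)^{-1} target, obtained by zeroing the gradient of a regularized least-squares objective in each space; it is a sketch under that assumption, not the paper's verbatim implementation:

```python
import numpy as np

def dhrls_predict(Y, Kd, Kg, Lhd, Lhg, lam_d, lam_g, beta):
    """Sketch of the two DHRLS subproblem solves and the averaged output.

    Assumed update form (hypothetical):
        alpha = (K + beta*I + lam * L_h @ K)^{-1} @ target.
    """
    n, m = Y.shape
    alpha_d = np.linalg.solve(Kd + beta * np.eye(n) + lam_d * Lhd @ Kd, Y)
    alpha_g = np.linalg.solve(Kg + beta * np.eye(m) + lam_g * Lhg @ Kg, Y.T)
    Fd = Kd @ alpha_d     # disease-space prediction, n x m
    Fg = Kg @ alpha_g     # gene-space prediction, m x n
    return (Fd + Fg.T) / 2.0

# sanity check: identity kernels, no graph terms, tiny beta -> F ~ Y
Y = np.array([[1., 0.], [0., 1.], [1., 1.]])
F = dhrls_predict(Y, np.eye(3), np.eye(2), np.zeros((3, 3)),
                  np.zeros((2, 2)), 0.0, 0.0, 1e-8)
```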