A Provable Smoothing Approach for High Dimensional Generalized Regression with Applications in Genomics

In many applications, linear models fit the data poorly. This article studies an appealing alternative, the generalized regression model. This model only assumes that there exists an unknown monotonically increasing link function connecting the response $Y$ to a single index $X^T\beta^*$ of explanatory variables $X\in\mathbb{R}^d$. The generalized regression model is flexible and covers many widely used statistical models. It fits the data generating mechanisms well in many real problems, which makes it useful in a variety of applications where regression models are regularly employed. In low dimensions, rank-based M-estimators are recommended to deal with the generalized regression model, giving root-$n$ consistent estimators of $\beta^*$. Applications of these estimators to high dimensional data, however, are questionable. This article studies, both theoretically and practically, a simple yet powerful smoothing approach to handle the high dimensional generalized regression model. Theoretically, a family of smoothing functions is provided, and the amount of smoothing necessary for efficient inference is carefully calculated. Practically, our study is motivated by an important and challenging scientific problem: decoding gene regulation by predicting transcription factors that bind to cis-regulatory elements. Applying our proposed method to this problem shows substantial improvement over the state-of-the-art alternative in real data.


Introduction
Regression models play a fundamental role in characterizing the relation among variables. Nonparametric and semiparametric regression models are commonly used alternatives to linear regression when the latter fails to fit the data well. Their advantages over simple linear regression models have been established in various fields [1,2].
In this article, we study a semiparametric generalization of linear regression as such an alternative. We assume
$$Y = D \circ \Lambda(X^T\beta^*, \epsilon). \qquad (1.1)$$
Here $Y$ is the scalar response variable, $D(\cdot)$ is an unknown increasing function, $\Lambda(\cdot, \cdot)$ is an unknown function that is strictly increasing in each of its arguments, $X$ represents the vector of explanatory variables, $\epsilon$ is an unspecified noise term independent of $X$, and $\beta^*$ is the regression coefficient characterizing the relation between $X$ and $Y$. The coefficient $\beta^*$ is assumed to be sparse. Model (1.1) is referred to as the generalized regression model. It was first proposed in econometrics [3], and is a very flexible semiparametric model, containing a parametric part, encoded in the linear term $X^T\beta^*$, and a nonparametric part, encoded in the link functions $D(\cdot)$ and $\Lambda(\cdot, \cdot)$ and the noise term $\epsilon$. In practice, we assume that $n$ independent realizations of $(Y, X)$, denoted as $\{(Y_i, X_i), i = 1, \ldots, n\}$, are observed. These observations will be used to fit the model.
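To make the model concrete, the following sketch simulates data from (1.1) under one particular, purely illustrative choice of link and noise: $D(u) = \exp(u)$, $\Lambda(u, v) = u + v$, and Cauchy-distributed $\epsilon$; the specific choices and variable names are ours, not prescribed by the model.

```python
import math
import random

random.seed(0)
n, d = 100, 10
beta_star = [0.0] * d
beta_star[0], beta_star[1] = 1.0, 0.5      # sparse truth; first entry fixed at 1

data = []
for _ in range(n):
    x = [random.gauss(0, 1) for _ in range(d)]
    # Cauchy noise via inverse-CDF sampling: heavy tails are allowed by the model
    eps = math.tan(math.pi * (random.random() - 0.5))
    index = sum(b * xi for b, xi in zip(beta_star, x))   # single index X^T beta*
    y = math.exp(index + eps)                            # D(u) = exp(u), Lambda(u, v) = u + v
    data.append((y, x))
```

Note that any method relying on normal errors or a linear link would be misspecified for this data-generating mechanism, while (1.1) covers it directly.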

A motivating genomic challenge
Model (1.1) naturally occurs in many applications. Below we elaborate on a challenging scientific problem that motivates our study. To help put the model into context, some background introduction is necessary. One fundamental question in biology is how genes' transcriptional activities are controlled by transcription factors (TFs). TFs are an important class of proteins that can bind to DNA to induce or repress the transcription of genes nearby the binding sites (Figure 1(a)). Humans have hundreds of different TFs. Each TF can bind to $10^2$–$10^5$ different genomic loci to control the expression of $10$–$10^3$ target genes. The genomic sites bound by TFs are also called cis-regulatory elements or cis-elements. Promoters and enhancers are typical examples of cis-elements. TF binding activities are context-dependent. The binding activity of a given TF at a given cis-element varies from cell type to cell type. It depends on the TF's expression level in each cell type as well as numerous other factors. One cis-element can be bound by multiple collaborating TFs that form a protein complex. Different TFs can bind to different cis-elements. In order to comprehensively understand the gene regulatory program, a crucial step is to identify all active cis-elements and their binding TFs in each cell type and biological condition. The state-of-the-art technology for mapping genome-wide TF binding sites (TFBSs) is Chromatin Immunoprecipitation coupled with sequencing (ChIP-seq) [4]. Unfortunately, each ChIP-seq experiment can only analyze one TF. Using this technology to map binding sites of all human TFs would require one to conduct hundreds of such experiments. Moreover, ChIP-seq requires high-quality antibodies which are not available for all TFs. Therefore, mapping binding sites for all TFs using ChIP-seq is both costly and technically infeasible. An alternative approach to mapping TFBSs is based on genome-wide sequencing of DNase I hypersensitive sites (DNase-seq) [5].
TFBSs are often sensitive to the cleavage of DNase I enzyme. Thus, DNase I hypersensitivity (DH), which can be measured in a genome-wide fashion using DNase-seq, can be used to locate cis-elements actively bound by TFs (Figure 1(a)). This approach is capable of mapping binding sites of all TFs in a biological sample through a single experimental assay. However, a major limitation of DNase-seq is that it does not reveal the identities of TFs that bind to each cis-element. If one could solve this problem by correctly predicting which TFs bind to each element, one would then be able to combine DNase-seq with computational predictions to identify all active cis-elements and their binding TFs in a biological sample using a single experimental assay. This would help scientists to remove a major roadblock in the study of gene regulation.
Many TFs recognize specific DNA sequence patterns called motifs (Figure 1(a)). Different TFs recognize different motifs. A conventional way to infer the identities of TFs that bind to a cis-element is to examine which TFs' DNA motifs are present in the cis-element. Unfortunately, motifs for 2/3 of all human TFs are unknown. Therefore, solely relying on DNA motifs is not sufficient to solve this problem. This motivates development of an alternative solution that leverages massive amounts of gene expression and DNase-seq data in public databases to circumvent the requirement for DNA motifs. The Encyclopedia of DNA Elements (ENCODE) project [6] has generated DNase-seq and gene expression data for a variety of different cell types. Using these data, one may examine how the protein-binding activity measured by DNase-seq at a cis-element varies across different cell types and how such variation is explained by variations in the expression of TFs (Figure 1(b)). Through this analysis, one may infer which TFs bind to each cis-element.
Formally, let $Y$ be the activity of a cis-element in a particular cell type measured by DNase-seq, and let $X = (X_1, \ldots, X_d)^T$ be the expression levels of $d$ different TFs in the same cell type (Figure 1(b)). The relationship between cis-element activity and TF expression can be described using a generalized regression model, $Y = D \circ \Lambda(X^T\beta^*, \epsilon)$, with a high dimensional sparse regression coefficient vector $\beta^*$. One expects the relationship between $Y$ and $X^T\beta^*$ to be monotonic since a TF has to be expressed in order to be able to bind to cis-elements. Also, increased TF expression may lead to increased binding. The relationship may not be linear and the noise may not be normal since DNase-seq generates count data. Although after normalization the data may no longer be integers, they usually are still non-normal and may be zero-inflated. The model is high dimensional since there are hundreds of TFs (i.e., $d = 10^2$–$10^3$), whereas the sample size (i.e., the number of ENCODE cell types with both DNase-seq and gene expression data) is small ($n = 50$–$100$). Lastly, $\beta^*$ has to be sparse since the number of TFs that can bind to a cis-element is expected to be small. This is because cis-elements are typically short. Each element cannot have physical contacts with too many different proteins. In this model, the non-zero components of $\beta^*$ may be used to infer the identities of binding TFs.

Existing works on generalized regression
In order to properly position our results in the literature, below we briefly review existing methodological works that are most relevant to our study of the generalized regression model. The generalized regression model contains many widely used econometric and statistical models, including important sub-classes such as the monotonic transformation model and the monotonic single-index model, of the following forms:
$$Y = G(X^T\beta^* + \epsilon) \quad \text{(monotonic transformation model)}, \qquad (1.2)$$
$$Y = G(X^T\beta^*) + \epsilon \quad \text{(monotonic single-index model)}, \qquad (1.3)$$
where the univariate link function $G(\cdot)$ is assumed to be strictly increasing.
There has been research on estimating the generalized regression model or its variants in low dimensions. These works follow two tracks. In the first track, [3] and [7] proposed rank-based M-estimators for directly estimating $\beta^*$, while treating the link functions $D(\cdot)$ and $\Lambda(\cdot, \cdot)$ as nuisance parameters. The corresponding estimator $\widehat\beta$ aims to maximize a certain rank correlation measure between $Y$ and $X^T\beta$, and hence often involves a discontinuous loss function. Based on an estimate of $\beta^*$, [8], [9], and [10] further proposed methods to estimate the link function $D \circ \Lambda(\cdot, \cdot)$ under different parametric assumptions on the link functions. This method was also extended in [11] and [12] to ultra-high dimensional settings by coupling it with a lasso-type penalty. In the second track, [13], [14], and [15], among many others, focused on studying the more specific models in (1.2) and (1.3), and suggested approximating $D \circ \Lambda(\cdot, \cdot)$ in order to estimate $\beta^*$ via kernel regression and sieve approximation. These approaches therefore naturally require geometric assumptions and smoothness conditions on $D \circ \Lambda(\cdot, \cdot)$.
In high dimensions when d could be much larger than the sample size n, serious drawbacks are associated with methods in both tracks.
For the second track, first, simultaneous estimation of D • Λ(·, ·) and β * requires extra prior assumptions on D • Λ(·, ·), which may not hold in practice. Secondly, nonparametric estimation is well known to be difficult in high dimensions. This could hurt the estimation of β * . Thirdly, these estimation procedures usually are very sensitive to outliers, which are common in real applications.
For the first track, rank-based M-estimators treat D • Λ(·, ·) as nuisance and hence could potentially gain efficiency and modeling flexibility in estimating β * . However, these rank-based methods also have serious computational and theoretical drawbacks. Computationally, the loss functions of rank-based M-estimators are commonly discontinuous. This violates the regularity conditions in most optimization algorithms [16,17] and makes the optimization problem intractable. It could create serious computational issues, especially in high dimensions. Theoretically, the discontinuity of loss functions adds substantial difficulty for analysis, especially in high dimensions. This further makes the statistical performance of corresponding estimators intractable.

Overview of key results
This article studies a simple smooth alternative to the above rank-based methods. This results in estimators that are computationally efficient to calculate, while keeping the modeling flexibility in estimating β * . The core idea is to replace the non-smooth rank-based loss function L(·) by a smooth loss function L n (·), indexed by n. L n (·) is designed to become closer to L(·) when n increases. A family of smoothing functions is accordingly studied.
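As a minimal illustration of the core idea, any smooth CDF $F(\cdot)$, evaluated at $\alpha_n t$, approaches the indicator $1(t > 0)$ as $\alpha_n$ grows. The sketch below shows this for two standard choices (function names are ours):

```python
import math

def gaussian_cdf(t):
    """Standard normal CDF, one canonical smooth surrogate for 1(t > 0)."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def sigmoid(t):
    """Logistic CDF (sigmoid), another smooth surrogate for 1(t > 0)."""
    return 1.0 / (1.0 + math.exp(-t))

# As alpha_n grows, F(alpha_n * t) approaches the indicator 1(t > 0):
t = 0.3
for alpha_n in (1.0, 10.0, 100.0):
    print(alpha_n, gaussian_cdf(alpha_n * t), sigmoid(alpha_n * t))
```

Both surrogates are differentiable everywhere, which is exactly what the non-smooth indicator lacks; the price is a bias that shrinks as $\alpha_n \to \infty$.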
Of note, the idea of approximating a non-smooth loss function by a smooth one has proven successful in the literature: see, for example, [18], [19], [20], [21], and [12] for smoothing Manski's maximum score estimator, an estimator targeting the area under the receiver operating characteristic curve (AUC), the quantile regression estimator, and Han's rank-based M-estimator in low and high dimensions. However, our results add new observations to this line of work, both theoretically and in some new biological applications, as outlined below.
Theoretically, although fairly well studied in low dimensions, smooth approximation of non-smooth and non-convex loss functions has received little attention in high dimensions. This is mainly due to the extreme irregularity of the original loss functions, which form discontinuous U-processes [22]. Theoretically speaking, smoothing renders two types of errors: (i) the bias, which characterizes the error introduced by employing $L_n(\cdot)$ to approximate $L(\cdot)$; (ii) the variance, which characterizes the error introduced by the randomness of $L_n(\cdot)$. Our study characterizes the behaviors of both types of errors, based on which one can calculate the amount of smoothing necessary to balance the bias and variance. Our theory holds without any assumption on $D \circ \Lambda(\cdot, \cdot)$ other than monotonicity. Additionally, the noise term $\epsilon$ is allowed to be non-centered and arbitrarily heavy-tailed (including Cauchy-distributed noise with possibly non-zero median), and to contain a substantial amount of outliers. Our theory hence confirms, in a mathematically rigorous fashion, several advantages of the smoothed Han's maximum rank correlation estimator.
Practically, the aforementioned advantages of the studied procedure are important for problems where a large number of different regression models need to be fitted. Consider our motivating problem of predicting binding TFs of cis-elements. Since different cis-elements behave differently, the relationship between $Y$ and $X$ may have different functional forms for different cis-elements even though all these functions are monotonic. Additionally, different cis-elements may have different noise distributions. Despite this heterogeneity, different cis-elements can be conveniently handled in a unified fashion using our generalized regression procedure, regardless of the forms of $D \circ \Lambda(\cdot, \cdot)$ and $\epsilon$. By contrast, manually constructing parametric models of different forms for different cis-elements would be difficult due to the large number of models that need to be constructed. The advantages of our approach over existing high dimensional linear and robust regression counterparts (e.g., those in [23] and [24]) are hence clear.
Applying our method to the binding TF prediction problem, we find that our approach is substantially more accurate than the competing lasso method for predicting binding TFs. This demonstrates the practical utility of the smoothing approach and shows that it can provide a solution to a long-standing open problem in biology.

Other related works
Monotonic single-index and transformation models are important subclasses of the generalized regression model. In contrast to the monotonic transformation model, there exists an increasing amount of research studying the monotonic single-index model. These include [25], [15], and [26], to name just a few. However, these methods require much stronger modeling assumptions, and are sensitive to different types of data contamination.
There are two more related semiparametric approaches to generalizing the linear regression model. The first is the general single-index model, which places no explicit geometric constraint on $G(\cdot)$ in (1.3). In high dimensions, [27] and [28] studied such a relaxed model. However, besides being sensitive to data contamination, these methods need to first sphericalize the data, which is extremely difficult in high dimensions. Recently, [29] proposed an alternative approach that does not require data sphericity. However, a boundedness assumption on $X$, subgaussianity of $\epsilon$, and certain smoothness conditions on $G(\cdot)$ are still required.
The second is sufficient dimension reduction. Related literature in this direction includes [30], [31], [32], [33], and [34]. Sufficient dimension reduction approaches only assume that $Y$ is independent of $X$ conditional on some linear projections of $X$. However, a data sphericalization step is also crucial in all these approaches, and each step of their derivations requires $d/n \to 0$ to proceed. These features make sufficient dimension reduction approaches vulnerable to high dimensionality, as will be further illustrated in simulations and real data experiments.

Paper organization
The rest of the paper is organized as follows. The next section presents our smoothing approach to estimating the generalized regression model. In Section 3, we use this method to solve our motivating scientific problem of decoding transcription factor binding. We demonstrate that this semiparametric regression approach is capable of substantially improving the accuracy over the state-of-the-art alternatives in real data. Section 4 gives theoretical results for understanding the proposed approach and calculates the appropriate smoothing amount. Section 5 provides discussions. Finite-sample simulation results and proofs are relegated to an appendix.

For any $x \in \mathbb{R}$, we define the sign function $\mathrm{sign}(x) := x/|x|$, where by convention we write $0/0 = 0$. For any two random vectors $X, Y \in \mathbb{R}^d$, we write $X \stackrel{d}{=} Y$ if $X$ and $Y$ are identically distributed. We let $c, C$ be two generic absolute positive constants, whose actual values may vary at different locations. We write $1(\cdot)$ for the indicator function. We let $\mathbb{S}^{d-1}$ represent the unit sphere $\{v \in \mathbb{R}^d : \|v\|_2 = 1\}$. For any two real sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n \lesssim b_n$, or equivalently $b_n \gtrsim a_n$, if there exists a constant $C$ such that $|a_n| \leq C|b_n|$ for any large enough $n$. We write $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $b_n \lesssim a_n$. The symbols $\lesssim_P$, $\gtrsim_P$, and $\asymp_P$ are analogous to $\lesssim$, $\gtrsim$, and $\asymp$, except that these relations hold stochastically.

Problem setup and methods
In this section, we provide some background on the generalized regression model and the associated rank-based M-estimators. We further describe the class of nonconvex smooth approximations that will be exploited and covered by our theory.
Throughout the paper, we assume Model (1.1) holds, and we have $n$ independent and identically distributed (i.i.d.) observations $\{(Y_i, X_i), i = 1, \ldots, n\}$ generated from (1.1). We do not pose any parametric assumption on the link functions $D(\cdot)$, $\Lambda(\cdot, \cdot)$ or the noise term $\epsilon$, except for the monotonicity requirement
$$D \circ \Lambda(a, \epsilon) \geq D \circ \Lambda(b, \epsilon) \text{ almost surely, as long as } a > b. \qquad (2.1)$$
For model identifiability, in the sequel, we assume we know at least one specific entry of $\beta^*$ that is nonzero, and fix it to be one. Without loss of generality, we assume $\beta^*_1 = 1$. The generalized regression model of form (1.1) represents a large class of models. These include the monotonic transformation and single-index models of the forms (1.2) and (1.3). More specifically, the generalized regression model covers the linear regression model, with $D \circ \Lambda(u, v) = u + v$; the Box-Cox transformation model [35], with $D \circ \Lambda(u, v) = (u + v)^\lambda$ for some $\lambda > 0$; the log-linear model and accelerated failure time model [36], with $D \circ \Lambda(u, v) = \exp(u + v)$; the binary choice model [37], with $D \circ \Lambda(u, v) = 1(u + v \geq 0)$; and the censored regression model [38], with $D \circ \Lambda(u, v) = \max(u + v, 0)$.

We focus on the following rank-based M-estimator, called the maximum rank correlation (MRC) estimator, which was first proposed in [3]:
$$\widehat\beta \in \mathop{\mathrm{argmax}}_{\beta}\ \frac{1}{n(n-1)} \sum_{i \neq i'} 1(Y_i > Y_{i'}) \, 1(X_i^T\beta > X_{i'}^T\beta).$$
The formulation of MRC is a U-process [39,21]. Intrinsically, MRC aims to maximize the Kendall's tau correlation coefficient [40] between $Y$ and $X^T\beta$, while treating the link functions as nuisance. This is attainable only via assuming $D \circ \Lambda(\cdot, \cdot)$ is monotonic, and has its roots in transformation and copula models [41]. For such models, rank-based approaches have been well known to possess certain efficiency properties [42,43].
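To make the MRC criterion concrete, here is a minimal pure-Python sketch of the unpenalized objective (the function name and toy data are ours for illustration). Because the criterion depends on $Y$ only through ranks, it is invariant to any increasing transformation of $Y$:

```python
def mrc_objective(beta, X, Y):
    """Han's maximum rank correlation (MRC) objective: the fraction of
    ordered pairs (i, j), i != j, that are concordant between Y and the
    fitted index X^T beta."""
    idx = [sum(b * xk for b, xk in zip(beta, x)) for x in X]
    n = len(Y)
    concordant = sum(
        1 for i in range(n) for j in range(n)
        if i != j and Y[i] > Y[j] and idx[i] > idx[j]
    )
    return concordant / (n * (n - 1))
```

For instance, replacing $Y$ by any strictly increasing transform of $Y$ (say $Y^2$ on positive data) leaves the objective unchanged, which is exactly why the link functions can be treated as nuisance. The objective is, however, a discontinuous step function of `beta`, which is the computational difficulty smoothing is meant to remove.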
To fully appreciate the rationale of MRC, we provide a proposition characterizing MRC's relation to the linear regression model, which further addresses the identifiability issue. Although the result in the first part is very straightforward, we did not find an explicit statement of this relation in the literature.

Proposition 2.1. Suppose $X$ is continuous and has a positive definite covariance matrix. We have: (i) when the link function $D \circ \Lambda(u, v) = u + v$ corresponds to the linear regression model (without requiring $X$ and $\epsilon$ to be centered), $\beta^*$ is, up to a positive scaling, the unique maximizer of the Pearson correlation between $Y$ and $X^T\beta$; (ii) as long as the link functions $D(\cdot)$ and $\Lambda(\cdot, \cdot)$ satisfy (2.1), $\beta^*$ is, up to a positive scaling, the unique maximizer of the Kendall's tau correlation coefficient between $Y$ and $X^T\beta$.

Throughout, we are interested in settings where the dimension $d$ could be much larger than the sample size $n$. In such settings, due to the restricted information obtainable, we have to further regularize the parameter space. In particular, sparsity of the parameter is commonly assumed. A seemingly natural regularized MRC estimator maximizes the penalized rank correlation
$$\frac{1}{n(n-1)} \sum_{i \neq i'} 1(Y_i > Y_{i'}) \, 1(X_i^T\beta > X_{i'}^T\beta) - \lambda_n \|\beta\|_1, \qquad (2.2)$$
or, in its equivalent minimization formulation, minimizes
$$L(\beta) + \lambda_n \|\beta\|_1, \quad \text{where } L(\beta) := -\frac{1}{n(n-1)} \sum_{i \neq i'} 1(Y_i > Y_{i'}) \, 1(X_i^T\beta > X_{i'}^T\beta). \qquad (2.3)$$
However, (2.3) involves a discontinuous loss function with abrupt changes. As stated in the introduction, this incurs serious computational and statistical issues. To tackle such issues, we propose the following smooth approximation to (2.3), using a set of cumulative distribution functions (CDFs) $\{F_{ii'}(\cdot), 1 \leq i < i' \leq n\}$ to approximate the indicator function $1(\cdot)$:
$$\widehat\beta_{\alpha_n} \in \text{local-argmin}\big\{ L_n(\beta) + \lambda_n \|\beta\|_1 \big\}, \quad \text{where } L_n(\beta) := -\frac{1}{n(n-1)} \sum_{i \neq i'} 1(Y_i > Y_{i'}) \, F_{ii'}\big(\alpha_n (X_i - X_{i'})^T\beta\big). \qquad (2.4)$$
Here "local-argmax{·}" and "local-argmin{·}" represent the sets of local maxima and minima of a given function, respectively. Of note, $\alpha_n$ is an explicitly stated smoothness parameter controlling the approximation speed, presumably increasing to infinity with $n$.
For any pair $(i, i')$, $F_{ii'}(\cdot)$ is a smooth CDF, so that $F_{ii'}(\alpha_n t)$ converges to the indicator $1(t > 0)$ for any fixed $t \neq 0$ as $\alpha_n \to \infty$. On one hand, the smoothed loss function $L_n(\beta)$ is close to the MRC loss function in (2.3) when $\alpha_n$ is large enough. This guarantees that "the bias term" is small enough. On the other hand, $L_n(\beta)$ is smooth, giving computational and statistical guarantees for convergence in optimizing the loss function (2.4).
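A minimal sketch of the smoothed pairwise loss, taking the Gaussian CDF as the approximating function for every pair (one member of the allowed family; the function name and toy data are ours):

```python
import math

def smoothed_mrc_loss(beta, X, Y, alpha_n):
    """Smoothed MRC loss: the indicator 1(t > 0) in the pairwise loss is
    replaced by the Gaussian CDF Phi(alpha_n * t). A larger alpha_n gives a
    closer, but less smooth, approximation of the original non-smooth loss."""
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF
    n = len(Y)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and Y[i] > Y[j]:
                diff = sum(b * (xi - xj)
                           for b, xi, xj in zip(beta, X[i], X[j]))
                total += phi(alpha_n * diff)
    return -total / (n * (n - 1))
```

Unlike the indicator-based loss, this function is differentiable in `beta`, so gradient-based optimizers apply; as `alpha_n` grows, its value approaches the negated rank correlation of the original objective.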
There are several notable remarks for the proposed smoothing approach.
As will be shown later, the above approximation functions all guarantee efficient inference. For the approximation parameter $\alpha_n$, theoretically, we recommend choosing it at a specific rate. Such a rate depends on $n$, $d$, and a sparsity parameter, and will be stated more explicitly in Section 4. In practice, cross-validation is recommended [44].

Remark 2.3. We note that the formulation (2.4) is related to the smoothing approaches introduced in [45], [21], and [12]. Actually, their approaches can be regarded as special cases of (2.4), obtained by taking $F_{ii'}(\cdot)$ to be the Gaussian CDF or the sigmoid function. However, as will be seen in Sections 3 and 4, we add new contributions to the literature in both theory and applications.

Remark 2.4.
It is also worth comparing the formulation in (2.4) to other robust regression formulations introduced in the high dimensional statistics literature. The original lasso estimator is well known to be vulnerable to non-linear link functions [46] and heavy-tailed noise [47]. [48], [24], and [23] proposed different robust (non)convex approaches to address the possible heavy-tailedness of $\epsilon$. In particular, [23] provided a framework for investigating a group of (non)convex loss functions (e.g., Huber's loss and Tukey's biweight), and studied the corresponding estimators. However, these procedures all stick to the linear link function, and hence will lead to inconsistent estimation, invalid statistical inference, and erroneous predictions when the link function is non-linear. As discussed in the introduction, non-linearity is common in complex biological systems.

Real data example
In this section, we apply our approach to the motivating scientific problem: predicting TFs that bind to individual cis-elements. As introduced before, solving this problem is crucial for studying gene regulation. DNase-seq experiments can be used to map active cis-elements in a genome-wide fashion. If one can correctly predict which TFs bind to each cis-element, one would be able to couple DNase-seq with computational predictions to efficiently predict genome-wide binding sites of a large number of TFs simultaneously in a new biological sample. This cannot be achieved using any other existing experimental technology.
The conventional approach that predicts binding TFs based on DNA motifs is contingent on the availability of known TF motifs. However, 2/3 of human TFs do not have known motifs. This motivates us to investigate an alternative solution that does not require DNA motif information. This new approach leverages large amounts of publicly available DNase-seq and gene expression data generated by the ENCODE project. Using data from multiple ENCODE cell types, this approach models the relationship between a cis-element's protein-binding activity Y measured by DNase-seq and the gene expression levels of d TFs, X, measured by exon arrays. It predicts binding TFs by identifying important variables in X in the regression model. Below we present real data results from a small-scale pilot study as a way to illustrate our semiparametric regression approach and compare it with other methods. A comprehensive whole-genome analysis and investigation of biology are beyond the scope of this article and will be addressed elsewhere.
In our pilot study, human DNase-seq and matching gene expression exon array data from 57 cell types were obtained from the ENCODE project. After data processing and normalization (see the appendix Section C.1 for details), 169 TFs whose gene expression varies across the 57 cell types were obtained. In parallel, 1,000 cis-elements were randomly sampled from a total of over $10^6$ cis-elements in the human genome for method evaluation. For each cis-element, the objective is to identify which of the 169 TFs may bind to it. Let $Y$ be a cis-element's DNase I hypersensitivity (a surrogate for protein-binding activity), measured by its normalized and log-transformed DNase-seq read count, in a cell type. Let $X$ be the normalized gene expression values of the 169 TFs in the same cell type. We want to use $Y$ and $X$ observed from 57 different cell types to learn their relationship and subsequently predict binding TFs.
Six competing methods are compared, listed below. For all methods except random prediction, the non-zero components of the estimate were used to predict which TFs can bind to a cis-element. The code that implements our method has been released online (https://github.com/zji90/RMRCE).
• RMRCE: the proposed regularized smoothed MRC estimator. The tuning parameter $\alpha_n$ was selected using cross-validation, and the Gaussian smoothing approximation was used.
• Hinge: the indicator function in Han's proposal is replaced by a convex hinge-type surrogate loss.
• SIM: a competing high dimensional single-index model method.
• Lasso: the lasso [49] was used.
• SDR: the sufficient dimension reduction method as proposed by [34].
• Random: TFs randomly sampled from the 169 TFs were treated as the predicted binding TFs.
Among these methods, the lasso represents the state-of-the-art linear model for characterizing the relationship between a response and sparse predictors. SIM and SDR represent competing semiparametric regression models. Hinge is a simple convex relaxation of Han's proposal; we include this comparison to find out whether the smoothed rank correlation is better than convex relaxation. The random method serves as a negative control. For all methods except random prediction, we tried different tuning parameters ($\alpha_n$ in RMRCE was selected using cross-validation) and calculated the overall accuracy under each parameter setting. The detailed implementation strategy for the methods is presented in Section A.1 in the appendix.
Prediction accuracy of different methods is compared using DNA motif information. The rationale is that DNA motifs were not used to make predictions, and therefore they provide an independent source of information for validation. For a method with better prediction accuracy, one should expect that its predicted binding TFs for a cis-element are more likely to be supported by the presence of corresponding motifs in the cis-element. By contrast, it is less likely to find motifs in a cis-element for incorrectly predicted binding TFs. Based on this reasoning, we downloaded all known vertebrate motif matrices from the JASPAR database [50] and mapped them to the human genome using the CisGenome software [51] under its default setting. For a given TF, if its motif(s) occurred one or multiple times within 500 base pairs (bp) of the center of a cis-element, then the TF's motif was called to be found in the cis-element. Let $i \in \{1, \ldots, 1000\}$ be the index of cis-elements. Let $M_i$ denote the set of TFs whose motifs were found in cis-element $i$. In order to characterize a method's prediction accuracy, we applied the method to predict binding TFs for each cis-element. If a predicted TF does not have any known DNA motif, we lack information for evaluating the correctness of the prediction. Therefore, for each cis-element, we only retained the predicted TFs that had known DNA motifs in the JASPAR database for estimating the prediction accuracy. Among all the 169 TFs, 63 TFs had known DNA motifs and were included in the evaluation. Let $A_i$ be the set of retained TFs for cis-element $i$, and let $|A_i|$ be the number of TFs in $A_i$. Let $B_i = A_i \cap M_i$ be the subset of TFs in $A_i$ whose motifs were found in cis-element $i$ (and hence validated), and let $|B_i|$ be the number of TFs in $B_i$. The prediction accuracy of a method was characterized by the following ratio:
$$\text{Accuracy} = \frac{\sum_{i=1}^{1000} |B_i|}{\sum_{i=1}^{1000} |A_i|}.$$
This ratio is the percentage of all testable predictions that were validated by the presence of DNA motifs.
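The accuracy computation above can be sketched as follows (set names mirror $A_i$, $B_i$, and $M_i$; the toy data in the test are hypothetical):

```python
def prediction_accuracy(predicted, motif_hits, tfs_with_known_motifs):
    """Motif-validation accuracy: sum_i |B_i| / sum_i |A_i|, where A_i keeps
    only the predicted TFs that have a known motif, and B_i = A_i & M_i are
    the predictions supported by a motif occurrence in cis-element i."""
    validated = testable = 0
    for pred_i, M_i in zip(predicted, motif_hits):
        A_i = pred_i & tfs_with_known_motifs   # testable predictions
        B_i = A_i & M_i                        # validated predictions
        validated += len(B_i)
        testable += len(A_i)
    return validated / testable if testable else float("nan")
```

Pooling counts across cis-elements before dividing (rather than averaging per-element ratios) weights every testable prediction equally, matching the stated "percentage of all testable predictions" interpretation.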
The higher the ratio, the more accurate a method is. Figure 2 compares the accuracy of different methods. For each method, we gradually increased the number of reported TFs by relaxing the tuning parameter (e.g., setting a smaller tuning parameter $\lambda_n$ will result in more TFs being reported by RMRCE), and the accuracy was plotted as a function of the number of predicted binding TFs. This figure shows that RMRCE is significantly more accurate than all the other methods, and that the random prediction method has the worst performance. Of note, as the number of selected TFs increases, the accuracy of all methods drops (except for random prediction, which remains stable). This is as expected since the overall signal strength decreases as more TFs are reported. RMRCE performance with different choices of $\alpha_n$ is presented in the appendix (Section C.2). A model diagnostic heuristic is developed to check the monotonicity assumption of the proposed model. The detailed descriptions and model diagnostic results in the real data application can be found in the appendix (Section C.3).
To shed light on why RMRCE substantially outperformed the lasso, Figure 3 shows data from two cis-elements as examples. For each cis-element, we used the lasso to identify binding TFs from the 169 TFs. The observed response $Y$ was then plotted against its fitted value $X^T\widehat\beta_{\text{lasso}}$ in Figure 3(a) and Figure 3(c). The blue curve in each plot represents a smooth curve fitted using the generalized additive model with cubic splines and default parameters as implemented in the R package mgcv, treating $Y$ as the response and $X^T\widehat\beta_{\text{lasso}}$ as the independent variable. Clearly, $Y$ and $X^T\widehat\beta_{\text{lasso}}$ do not have a linear relationship. Moreover, the figures show that the relationships between $Y$ and $X^T\widehat\beta_{\text{lasso}}$ for different cis-elements have different functional forms. This makes the use of parametric models complicated, as one would need to build models with different functional forms for different cis-elements, which would be tedious if one wants to analyze millions of cis-elements in the whole genome. Figure 3(b) and Figure 3(d) show the normal Q-Q plots for the residuals obtained from the fitted smooth curves. These figures show that, even when a non-linear smooth function was fitted to the data, the residuals are still non-normal and may have a complicated distribution. Figure 4 shows a similar analysis using RMRCE. In Figure 4(a) and Figure 4(c), $Y$ was plotted against the RMRCE fitted value $X^T\widehat\beta_{\text{RMRCE}}$. Clearly, the relationship between $Y$ and $X^T\widehat\beta_{\text{RMRCE}}$ is non-linear, but such a non-linear, yet monotonically increasing, relationship can be handled by our method in a unified fashion regardless of the specific functional forms. Figure 4(b) and Figure 4(d) are the residual Q-Q plots, where the residuals were obtained from the fitted smooth curves in Figure 4(a) and Figure 4(c). These Q-Q plots show that the distributions of the residuals are non-normal. Such non-normal residuals, however, can be naturally handled by our generalized regression model.
The above analyses demonstrate the value of our approach for handling noisy, monotonic, non-normal, and non-linear data. Whereas simple linear regression and marginal screening cannot fully capture the subtlety of such a complex system, RMRCE handles these challenges well. Thus, RMRCE is a more appealing method for the studied problem than methods based on simple linear models, such as the lasso. A similar conclusion applies to the comparison with the other four competing methods. In particular, as will be shown in the synthetic data analysis (Section A), Hinge relies on a convex approximation that is too crude for the studied problem, and when the generalized regression model is a reasonable description of the data (as the experimental results in this section suggest), SIM and SDR are much less efficient than RMRCE.

Theory
This section is devoted to investigating the statistical performance of the smoothing estimator β_αn in (2.4). To characterize the estimation error of β_αn relative to β*, we study separately the approximation error (bias part) and the stochastic error (variance part).
Before introducing the main theorems, we first set up some additional notation. Define the population approximation minimizer β*_αn as the minimizer of L̄_n(β) := E L_n(β), whose form can be easily derived. Analogously, Proposition 2.1 gives the population loss L(·) in (4.2). We decompose the statistical error ‖β_αn − β*‖_2 into two parts: the approximation error ‖β*_αn − β*‖_2 and the stochastic error ‖β_αn − β*_αn‖_2. We first characterize the approximation error in the next section; the stochastic error is studied in Section 4.2.
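In symbols, the decomposition follows from the triangle inequality:

```latex
\|\beta_{\alpha_n}-\beta^*\|_2
  \;\le\;
  \underbrace{\|\beta_{\alpha_n}-\beta^*_{\alpha_n}\|_2}_{\text{stochastic error}}
  \;+\;
  \underbrace{\|\beta^*_{\alpha_n}-\beta^*\|_2}_{\text{approximation error}}.
```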
Because the loss functions L(β), L_n(β), and L̄_n(β) are all non-convex, adopting an argument similar to that in [23], we first focus on the following constrained minimizers: β_{r,αn} := argmin of L_n(·) over a ball of radius r around the truth, and β*_{r,αn} := argmin of L̄_n(·) over the same ball. The estimators β_{r,αn} and β*_{r,αn} are constructed merely for theoretical purposes. Moreover, the parameter r, which controls the convexity region around the truth β*, need not be specified in practice.
As will be shown in the next two subsections, β_{r,αn} and β*_{r,αn} are local optima of L_n(·) and L̄_n(·) under some explicitly characterized regularity conditions. We therefore end this section with a general theorem quantifying the behavior of a local optimum β_αn.

Approximation error
This section first shows that L̄_n(·) in (4.1) approximates L(·) in (4.2) well. We then study the behavior of the minimizers of L̄_n(·) and L(·).

Generalized regression model
For the generalized regression model of the form (1.1), to guarantee that L̄_n(·) approximates L(·) fast, we require the following two assumptions on the data generating scheme and the approximation functions {F_{ii'}(·), 1 ≤ i < i' ≤ n}.
On one hand, Assumption (A1) is commonly adopted in the high dimensional regression literature [52,53]. Of note, Assumption (A1) is by no means necessary: by inspecting the proof of Lemma 4.1, it can readily be relaxed to a multivariate subgaussian assumption [54]. For clarity of presentation, however, this paper focuses on the multivariate Gaussian case. On the other hand, Assumption (A2) is mild; it is easy to check that the sigmoid function and the Gaussian and double exponential CDFs discussed in Remark 2.2 all satisfy (A2).
where the absolute constants C_1 and C_2 are given in Assumption (A2).
Lemma 4.1 shows that L̄_n(·) uniformly approximates L(·) at a rate linear in 1/α_n. This alone, however, is not enough to establish convergence of β*_αn to β*. For that we require a "local strong convexity" of L(·) around the truth β*. To this end, write Γ for the second derivative of the population loss function L(·) in (4.2). The next assumption regularizes the behavior of Γ's spectrum.

F. Han et al.
Lemma 4.3 verifies that, when α_n is chosen to increase to infinity with n, the approximation error decays to zero at rate α_n^{−1/2}, assuming the eigenvalues of Γ are bounded. We focus on such β*_{r(γ),αn} and the corresponding estimator β_{r(γ),αn}.

Monotonic transformation model
This section studies how sharp the results in the last subsection are. We focus on the monotonic transformation model (1.2), and first show that Lemma 4.1 cannot be improved by much, even under this much more restrictive model.
we then have that the following three statements hold.
(3). If we further assume the sigmoid approximation and Gaussian-distributed noise with bounded parameters, we have the stated bound. Combining it with the second statement yields, for any ‖β‖_2 = M, the corresponding inequality. Lemma 4.4 shows that, even under a very idealized parametric model, the approximation error in Lemma 4.1 can only be improved slightly, from linear to quadratic decay.
Secondly, we note that Proposition 4.2 relies on an implicit assumption, (A3). To fully appreciate this proposition, we provide an alternative route that clearly reveals its connection to the data generating parameter Σ. For this, we assume the monotonic transformation model (1.2) and one more assumption.
We note that the noise assumption in (A3') is mild: it does not require any moment condition on the noise terms {ε_i, i = 1, . . . , n}. In particular, the next proposition shows that both Gaussian and Cauchy distributions satisfy Assumption (A3'). With Assumption (A3'), we are now ready to prove the local strong convexity of L(·). Lemma 4.6. Assume Assumptions (A1) and (A3') hold. Then we can pick a set of positive absolute constants γ := {γ_1, γ_2} with γ_2/γ_1 − 1 close to 0 such that, for some small enough r(γ) > 0 depending only on γ, as long as ‖β − β*‖_2 ≤ r(γ), the stated inequality holds. As a simple consequence, an explicit bound follows for Σ = I_d. Remark 4.7. Compared to the result in Proposition 4.2, Lemma 4.6 gives explicit inequalities based on Σ, and the proof techniques exploited are utterly different from those in [22], which are based on Taylor expansion.

Stochastic error
This section investigates the stochastic error term ‖β_αn − β*_αn‖_2. This falls within the scope of high dimensional M-estimator theory [53], with some slight modifications due to the additional constraints on β: ‖β‖_1 = 1 and ‖β − β*‖_2 ≤ r.

A general framework for constrained M-estimators
In this section we study general M-estimators. In detail, consider the following constrained M-estimator, which aims to estimate the truth θ*. Here A ⊂ R^d is a subset of the d-dimensional real space, L_n(·) is the loss function, and P(·) is the penalty term. We pose five assumptions, (B1)-(B5), on A, L_n(·), and P(·). • (B4). (Restricted strong convexity). Define the set C(M, M̄^⊥; θ*), where Δ_N denotes the projection of Δ onto an arbitrary subspace N of R^d. We assume that, for all Δ ∈ C(M, M̄^⊥; θ*), the restricted strong convexity inequality holds, where κ_L, δ_L, and τ²_L(θ*) are three constants. • (B5). We assume the stated bound, where P*(·) is the dual norm of P(·).
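For reference, in the framework of [53] the restricted strong convexity condition and its cone typically take the following form; this is a sketch of the standard statement, and the exact constants here may differ from those in (B4):

```latex
\delta\mathcal{L}_n(\Delta;\theta^*)
  := \mathcal{L}_n(\theta^*+\Delta)-\mathcal{L}_n(\theta^*)
     -\langle \nabla\mathcal{L}_n(\theta^*),\,\Delta\rangle
  \;\ge\; \kappa_{\mathcal{L}}\,\|\Delta\|_2^2 \;-\; \tau_{\mathcal{L}}^2(\theta^*),
  \qquad \forall\,\Delta\in C(\mathcal{M},\overline{\mathcal{M}}^{\perp};\theta^*),

\qquad
C(\mathcal{M},\overline{\mathcal{M}}^{\perp};\theta^*)
  := \bigl\{\Delta:\ P(\Delta_{\overline{\mathcal{M}}^{\perp}})
        \le 3\,P(\Delta_{\overline{\mathcal{M}}}) + 4\,P(\theta^*_{\mathcal{M}^{\perp}})\bigr\}.
```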
Assumptions (B1)-(B5) are posed so as to cover estimators like β_αn, and hence also (slightly) generalize the corresponding assumptions in [53]. Under the above assumptions, the following theorem holds; it is a straightforward extension of Theorem 1 in [53].

Stochastic error analysis
For analyzing the high dimensional stochastic error, sparsity is commonly imposed for model identifiability and efficient inference, and we adopt this idea. In particular, we assume the following regularity condition: the observations {(Y_i, X_i), i = 1, . . . , n} are generated in a triangular array setting as in [55], and, for some pre-specified α_n, ‖β*_{r(γ),αn}‖_0 ≤ s_n, where the parameter s_n may change with n. Note that, in this setting, β* itself is not necessarily sparse.
This formulation of sparsity can be regarded as a working assumption, and intrinsically comes from the sieve idea [56,57]. A similar assumption is implicitly stated in [24].
For guaranteeing efficient inference, we still require three more assumptions, listed below.
(A4). Assume F_{ii'}(·) is twice differentiable and that an absolute constant bounds the relevant derivatives. Assumption (A4) is easy to check: it is straightforward to verify that the sigmoid function and the Gaussian and double exponential CDFs introduced as approximation functions in Remark 2.2 all satisfy it. On the other hand, Assumption (A6) requires a slightly more stringent condition on λ_min(Σ) than what is required for the lasso, because of the additional effort needed to control the smoothing approximation error. Finally, since the quantity involved is very easy to calculate, Assumption (A5) can be verified empirically.
Indeed, in the appendix we show via extensive simulation studies that many statistical models satisfy Assumption (A5).
Remark 4.9. Of note, [22] showed that, under certain regularity conditions, ∇²L(β*) is positive definite. By very similar arguments, we can show that ∇²L̄_n(β*) = E∇²L_n(β*) is positive definite. Accordingly, by continuity, Assumption (A5) holds with high probability when d/n is relatively small. Empirical results in appendix Section B.1 show, moreover, that the convexity property still holds with high probability even when the condition number of the population design matrix is very small and d/n is very large (e.g., n = 50 and d = 800).
Before presenting the main result of this section, we define an additional parameter. Write H for the cone corresponding to the index set of nonzero entries in β*_{r,αn}. We define a parameter κ_n controlling the uniform convergence of L_n(·) to L̄_n(·) within the cone H, where the expectation is taken in the outer probability sense. Of note, because both L_n(β) and L̄_n(β) are bounded, we have κ_n ≲ 1; κ_n, however, can be much smaller.
In particular, κ_n = O(n^{−1/2}) by standard U-statistics empirical process theory [58,59,60] when d is fixed and r(γ) = o(1). A more refined evaluation of the order of κ_n can be found in [61].

Main results
Combining Lemmas 4.3 and 4.10, we are now ready to state the main theorem. It characterizes the consistency of the proposed smoothing estimators under a specified scaling condition. In particular, when d is fixed, ‖β_αn − β*‖_2 → 0 in probability.
In practice, it is very difficult to calculate theoretically exactly how large d is allowed to be using Theorem 4.11. A rule of thumb in our case, however, mimicking corresponding rules in robust statistics (cf. [62]), is 10 log d ≤ n^{1/3}.
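As a quick sanity check, the rule of thumb can be inverted to give the largest admissible dimension for a given sample size; a minimal sketch:

```python
import math

def max_dimension(n):
    """Largest integer d satisfying the rule of thumb 10*log(d) <= n**(1/3)."""
    return math.floor(math.exp(n ** (1.0 / 3.0) / 10.0))

# For n = 1,000,000, n^(1/3) = 100, so log(d) <= 10 and d <= e^10.
print(max_dimension(1_000_000))  # 22026
```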

Discussions
In this manuscript we focused on the lasso-type penalty in formulation (2.3). It should be highlighted that SCAD [63] and MCP [64] type penalties could be implemented and studied in the same manner, with similar theoretical and empirical performance to be expected. Since the extension from the lasso-type penalty to non-convex ones is beyond the scope of this paper, we leave it for future study. Another important issue only briefly touched upon is model diagnostics. To our knowledge, there has been little study of goodness-of-fit testing for Han's model (1.1) in high dimensions. In this manuscript we provided several heuristics to check monotonicity between Y and a single index of X (cf. Section 3 and Section C.3 in the appendix); developing a theoretically solid test remains an interesting direction for future work. This manuscript has shown the advantages of the smoothed maximum rank correlation method both theoretically and empirically. We close this section with a discussion of some limitations. (i) Computationally, as shown in appendix Section A.2.3, RMRCE, though faster than several semiparametric competitors, requires substantially more computation time than the lasso. (ii) Theoretically, if the true generating model is linear with Gaussian noise, RMRCE loses efficiency relative to the lasso. Accordingly, if the practitioner strongly believes that, for instance, a simple linear model with Gaussian noise applies to the data at hand, then the lasso is recommended over RMRCE.

Appendix A: Synthetic data analysis
This section empirically examines the finite sample performance of the proposed Regularized Maximum Rank Correlation Estimator (RMRCE) β_αn in (2.4) using synthetic data. We present various empirical results comparing the proposed methods to the competing methods. Our simulations highlight the distinctive attributes of the proposed method: it performs comparably to the lasso under the high dimensional linear model, but beats the lasso under non-linear models. The proposed methods outperform Hinge, SIM, and SDR under both linear and non-linear models.

A.1. Algorithm description
We solve (2.4) with the coordinate descent algorithm [65], leaving the first coefficient unpenalized. This problem falls within the scope of coordinate descent theory [17]. For fairness of comparison, in the sequel we also leave the first coefficient unpenalized when implementing the lasso.
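To make the procedure concrete, here is a minimal pure-Python sketch of one coordinate sweep. The penalized objective below, a sigmoid-smoothed rank correlation minus an ℓ1 penalty, is our reading of (2.4) and is an illustration only, not the paper's exact implementation; the per-coordinate local grid search stands in for the exact one-dimensional update.

```python
import math

def smoothed_objective(beta, X, Y, alpha, lam):
    """Sigmoid-smoothed rank correlation with an l1 penalty
    (the first coefficient is left unpenalized)."""
    n, obj = len(Y), 0.0
    for i in range(n):
        for j in range(n):
            if i != j and Y[i] > Y[j]:
                z = sum((X[i][k] - X[j][k]) * beta[k] for k in range(len(beta)))
                obj += 1.0 / (1.0 + math.exp(-alpha * z))
    return obj / (n * (n - 1)) - lam * sum(abs(b) for b in beta[1:])

def coordinate_pass(beta, X, Y, alpha, lam, grid=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """One sweep of coordinate ascent; each coordinate takes the best local
    step on a small grid (step 0.0 is included, so the penalized objective
    never decreases)."""
    beta = list(beta)
    for k in range(len(beta)):
        best = max(grid, key=lambda s: smoothed_objective(
            beta[:k] + [beta[k] + s] + beta[k + 1:], X, Y, alpha, lam))
        beta[k] += best
    return beta

# Toy data from a noise-free monotone model Y = (x1 + 0.5*x2)^3.
X = [[0.1, 1.0], [0.5, -0.2], [1.0, 0.3], [-0.4, 0.8], [0.9, -1.0]]
Y = [(x[0] + 0.5 * x[1]) ** 3 for x in X]
b0 = [1.0, 0.0]
b1 = coordinate_pass(b0, X, Y, alpha=5.0, lam=0.01)
```

Because the zero step is always a candidate, each sweep is guaranteed not to decrease the penalized objective, which is the property the non-convexity of the rank correlation makes worth preserving.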
One implementation issue is the choice of the tuning parameter λ_n and the smoothing parameter α_n, for which we propose five-fold cross-validation. Using five randomly split subsets of equal size, we define the criterion CV(λ_n, α_n) in (A.1), where n_k is the number of data points in the k-th part and β^{(−k)}_αn(λ_n) is obtained from the other four parts of the training data with tuning parameter λ_n and smoothing parameter α_n. We then select the (λ_n, α_n) that maximize CV(λ_n, α_n) over a grid of candidate values. When α_n is pre-chosen, λ_n alone is chosen to optimize (A.1) with the chosen α_n.
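The held-out criterion in (A.1) is a pairwise concordance score between held-out responses and the fitted single index; a sketch of computing it for one fold follows (this is our reading of (A.1); the fold splitting and the fitting routine itself are omitted).

```python
def cv_fold_score(beta, X_hold, Y_hold):
    """(A.1)-style pairwise concordance of held-out responses with the
    fitted single index X^T beta, averaged over ordered pairs."""
    n = len(Y_hold)
    score = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xb_i = sum(x * b for x, b in zip(X_hold[i], beta))
            xb_j = sum(x * b for x, b in zip(X_hold[j], beta))
            if Y_hold[i] > Y_hold[j] and xb_i > xb_j:
                score += 1.0
    return score / (n * (n - 1))

# A perfectly monotone held-out fold scores 0.5: only the Y_i > Y_j half
# of the ordered pairs can contribute.
X_hold = [[1.0], [2.0], [3.0]]
Y_hold = [1.0, 4.0, 9.0]
print(cv_fold_score([1.0], X_hold, Y_hold))  # 0.5
```

Summing this score over the five folds gives the quantity maximized over the (λ_n, α_n) grid.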
Another implementation issue is the choice of starting point for the coordinate descent algorithm, for which we adopt the strategy proposed in [66]. Specifically, let {e_j ∈ R^d, 1 ≤ j ≤ d} be the standard basis, with the j-th entry equal to 1 and all other entries 0. The algorithm starts with β^(0) = sign(L_{j*}) e_{j*}, where the index j* maximizes the absolute value of L_j over j ∈ {2, . . . , d}. In practice, one could also include other starting point candidates, such as the lasso and sieve solutions, and select the one that maximizes the objective function in (2.2). Our numerical results, however, indicate that the above simple strategy works well in a variety of applications.
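The starting-point rule can be sketched as follows, taking L_j to be the empirical rank correlation between Y and the j-th coordinate (our reading of the criterion; for simplicity the sketch scans all coordinates rather than j ∈ {2, . . . , d}).

```python
def marginal_rank_corr(xcol, Y):
    """Empirical pairwise concordance of one coordinate with Y,
    rescaled to lie in [-1, 1] (assumes no ties in Y)."""
    n, s = len(Y), 0.0
    for i in range(n):
        for j in range(n):
            if i != j and Y[i] > Y[j]:
                if xcol[i] > xcol[j]:
                    s += 1.0
                elif xcol[i] < xcol[j]:
                    s -= 1.0
    return s / (n * (n - 1) / 2.0)

def starting_point(X, Y):
    """beta0 = sign(L_{j*}) e_{j*}, with j* maximizing |L_j|."""
    d = len(X[0])
    scores = [marginal_rank_corr([row[j] for row in X], Y) for j in range(d)]
    j_star = max(range(d), key=lambda j: abs(scores[j]))
    beta0 = [0.0] * d
    beta0[j_star] = 1.0 if scores[j_star] >= 0 else -1.0
    return beta0

# Y increases with the first coordinate only.
X = [[0.1, 5.0], [0.4, -1.0], [0.9, 2.0], [1.5, 0.0]]
Y = [1.0, 2.0, 3.0, 4.0]
print(starting_point(X, Y))  # [1.0, 0.0]
```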

A.2. Synthetic data analysis
This section investigates the empirical performance of the proposed methods via synthetic data analysis. We compare the proposed methods to Hinge, the lasso, SIM, and SDR. The descriptions of these methods are in Section 3. First, we show, under the high dimensional linear model, the proposed methods perform reasonably well compared to the lasso. Secondly, we show the proposed methods beat the lasso under a monotonic transformation model (1.2). Thirdly, the proposed methods beat Hinge, SIM, and SDR under both linear and non-linear models.
We also explore the performance of the proposed methods with the tuning parameter α n chosen by cross validation or set to be fixed values 1, 3, 5, 7, 9. The results show that, as long as α n is comparatively large (α n ≥ 3), the estimates and selection results are not sensitive to α n . In addition, choosing α n by cross validation only leads to marginal improvement of performance compared to fixed α n .
We tried d = 50 and d = 200 with a series of sample sizes for variable selection and estimation. Due to space limits, below we mainly show variable selection and estimation results for d = 200. Similar patterns hold for d = 50, some of whose results are presented in later sections.
We first compare variable selection performance with d = 200. To this end, Figures 5 and 6 plot the receiver operating characteristic (ROC) curves for RMRCE with different choices of the tuning parameter α_n, and for all the competing methods; the results are based on 200 replications. They show that the proposed methods have overall good variable selection performance compared to the competing methods. Detailed comments follow. (i) As the sample size increases, the ROC curves shift towards the upper left corner, as expected. (ii) As seen in Figure 5, the differences between different smoothing approximations and different choices of α_n are slight; RMRCE with α_n selected by cross validation performs slightly better than with fixed α_n. (iii) The lasso has slightly better variable selection performance, as shown in Figure 6, especially for small sample sizes. This is reasonable, since the simulation setup in (A.2) is exactly the right model for the lasso. (iv) RMRCE outperforms Hinge, showing that the smoothed rank correlation in RMRCE is better than the convex relaxation in Hinge. (v) RMRCE outperforms the other two semiparametric regression methods, SIM and SDR. Table 1 gives the averaged false positive rates and averaged true positive rates for all the competing methods; the observations are similar to those from Figure 6.
Secondly, we compare estimation errors with d = 200. Tables 2 and 3 present the estimation errors relative to β*, as functions of the tuning parameter λ_n, for RMRCE with different choices of α_n and for all the competing methods, across 200 replications. Note that, for the lasso, the estimation error is calculated as ‖β_lasso(λ_n)/‖β_lasso(λ_n)‖_1 − β*‖_2; the estimation errors for SIM and SDR are calculated in the same way. The results show that for RMRCE, Hinge, and the lasso, the estimation error first decreases and then increases as the tuning parameter increases; for SIM and SDR, the estimation error first decreases and then stays almost constant as the number of selected variables increases. Moreover, with increased sample size, the error curves shift downward towards 0. Table 2 also shows that the differences between the proposed methods with different approximation functions and different choices of α_n are mild as long as α_n is large enough (α_n ≥ 3); RMRCE with α_n = 1 is comparatively worse than RMRCE with α_n ≥ 3. Tables 2 and 3 further show that, under the high dimensional linear model (A.2), the proposed methods perform reasonably well, although slightly worse than the lasso (for fair comparison, the lasso results are based on the standardized estimation error ‖β_lasso/‖β_lasso‖_1 − β*‖_2, with β_lasso the lasso estimator of β_0; the same applies to SIM and SDR). In addition, the proposed methods outperform Hinge, SIM, and SDR.
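The standardized estimation error used for the lasso, SIM, and SDR comparisons can be computed as follows (a small sketch):

```python
def normalized_error(beta_hat, beta_star):
    """l2 distance between the l1-normalized estimate and the truth."""
    l1 = sum(abs(b) for b in beta_hat)
    scaled = [b / l1 for b in beta_hat]
    return sum((s - t) ** 2 for s, t in zip(scaled, beta_star)) ** 0.5

# An estimate that is correct up to scale has error 0 after normalization.
print(normalized_error([2.0, 2.0], [0.5, 0.5]))  # 0.0
```

The ℓ1 rescaling puts the lasso estimate on the same scale as the identifiability constraint ‖β‖_1 = 1 used by RMRCE, making the errors comparable.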

A.2.2. High dimensional generalized regression model
This section considers the monotonic transformation model (1.2) with G(x) = x³, where X_i and β_0 are the same as in Model (A.2), and ε_i follows the standard normal distribution and is independent of X_i. This is a setting in which the lasso can still possibly provide a consistent estimator [67], but its estimation and model selection efficiency is expected to be low, as demonstrated empirically here. Variable selection results are shown in Figure 7 with d = 200. They confirm that the lasso, as well as the other competing methods, performs much worse than the proposed methods. Table 4 further reports the averaged TPRs and FPRs along with their standard deviations over 200 replications; as shown there, Hinge, the lasso, SIM, and SDR perform much worse than the proposed methods. Secondly, Table 5 shows the normalized ℓ2 estimation error. For the lasso, the estimation error is calculated as ‖β_lasso(λ_n)/‖β_lasso(λ_n)‖_1 − β*‖_2; the errors for SIM and SDR are calculated in the same way. With d = 200, the comparison between the methods confirms the advantage of the proposed method.

A.2.3. Comparison of computation time
Table 6 shows the median computation time over 200 replicated runs for RMRCE, Hinge, the lasso, SIM, and SDR with different n and d = 50. For demonstration purposes we only present the computation time with fixed tuning parameters: for RMRCE, the Gaussian approximation is used with α_n = 5 and λ_n = 1; for Hinge and the lasso, λ_n = 1; SIM and SDR are run with their default parameters. The results are similar for other choices of the tuning parameters, as long as they are in a reasonable range. The lasso is the most efficient of all methods, with computation time close to zero. RMRCE takes significantly more time than the lasso, but is still much faster than SIM and SDR. In addition, RMRCE is slightly slower than Hinge, since Hinge replaces the non-convex rank correlation with a convex function, making the optimization faster. In summary, RMRCE is much faster than SIM and SDR, but not as efficient as Hinge and the lasso.

Appendix B: Empirical verification of the theory
This section examines Theorem 4.11 and Assumption (A5) in Section 4.2 using synthetic data.

B.1. Convexity verification
For investigating the stochastic error ‖β_αn − β*_αn‖_2, Assumption (A5) is required to hold. In this section, we verify Assumption (A5) via various empirical studies of the positive definiteness of the Hessian matrix of the smoothed loss. For clarity, we focus on the high dimensional linear model (A.2) with β_0 = (5, 4, 3, 2, 1, −1, −3, −5, 0, · · · , 0)^T and X_i a d-dimensional random vector generated from a multivariate normal distribution N_d(0, Σ = (σ_jk)) with σ_jk = 0.5^{|j−k|} for 1 ≤ j, k ≤ d. We further generate ε_i from a mixture of the standard normal and a fraction δ = 0% or 20% of outliers following Cauchy(0, 0.01); the noise is independent of X_i. We present results under different noise distributions as well as different smoothing approximations. Table 7 gives the results, considering all three examples of smoothing approximations in Remark 2.2, with pure Gaussian noise and with the mixture of Gaussian noise and 20% Cauchy outliers. Here, based on extensive simulation studies, α_n is recommended to be 5 for the sigmoid, Gaussian, and double exponential CDF approximations. The results are based on 200 replications.
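The positive-definiteness check can be sketched as follows: a finite-difference Hessian of the smoothed loss at the truth, tested via an attempted Cholesky factorization. This is a pure-Python illustration of the procedure on toy data, not the paper's exact code.

```python
import math, random

def smoothed_loss(beta, X, Y, alpha=5.0):
    """Negative sigmoid-smoothed rank correlation (the loss to minimize)."""
    n, s = len(Y), 0.0
    for i in range(n):
        for j in range(n):
            if i != j and Y[i] > Y[j]:
                z = sum((X[i][k] - X[j][k]) * beta[k] for k in range(len(beta)))
                s -= 1.0 / (1.0 + math.exp(-alpha * z))
    return s / (n * (n - 1))

def fd_hessian(f, beta, h=1e-3):
    """Central finite-difference Hessian of f at beta."""
    d = len(beta)

    def shift(a, b, da, db):
        v = list(beta)
        v[a] += da * h
        v[b] += db * h
        return f(v)

    return [[(shift(a, b, 1, 1) - shift(a, b, 1, -1)
              - shift(a, b, -1, 1) + shift(a, b, -1, -1)) / (4 * h * h)
             for b in range(d)] for a in range(d)]

def is_positive_definite(H):
    """Cholesky-based test for positive definiteness."""
    d = len(H)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                v = H[i][i] - s
                if v <= 0.0:
                    return False
                L[i][i] = math.sqrt(v)
            else:
                L[i][j] = (H[i][j] - s) / L[j][j]
    return True

# Toy check at the truth under a small linear model.
random.seed(0)
n, d = 40, 2
beta_star = [1.0, -0.5]
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
Y = [sum(x * b for x, b in zip(row, beta_star)) + 0.1 * random.gauss(0, 1)
     for row in X]
H = fd_hessian(lambda b: smoothed_loss(b, X, Y), beta_star)
print(is_positive_definite(H))
```

Repeating this over replications and recording the fraction of positive definite Hessians reproduces the kind of summary reported in Table 7.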
There are several noteworthy findings. First, the performance is similar across all cases: as the sample size n increases, the proportion of positive definite Hessian matrices increases to 1, and a small dimension d leads to a higher proportion of convexity. Secondly, comparing noise settings, pure Gaussian noise enjoys a higher proportion of convexity than the mixture noise for large n. Thirdly, the different smoothing approximations perform similarly, though the Gaussian CDF approximation enjoys slightly higher proportions of positive definite Hessian matrices.

B.2. Verification of the theorem
This section verifies the property given in Theorem 4.11. Simulations with different generating models have shown that the proposed methods are robust to the monotonic functions D(·) and Λ(·, ·). Hence, to demonstrate the scaling property of the estimation errors ‖β_αn − β*‖_2, we focus on the high dimensional linear model (A.2) with ε_i generated from the standard normal distribution, independent of X_i. For simplicity, we only show results for RMRCE with α_n = 5; the results are similar for other choices of α_n.
To this end, Figure 8 illustrates the scaling of the averaged estimation errors ‖β_αn − β*‖_2 against n and n/(s log d) across 200 replications for different approximation functions; performance is also compared to the lasso. Figure 8 clearly shows a "stacking curve" phenomenon: when plotted against the rescaled sample size n/(s log d), the error curves corresponding to different d's all align. In addition, the error decays to zero as the sample size n increases, and increases as d increases. Figure 8 thus confirms the theoretical findings in Theorem 4.11. For comparison, we also include the lasso's curves in Figure 8(d) and 8(h); they show a similar "stacking curve" phenomenon.

C.1. Data processing and normalization for TF prediction
We downloaded DNase-seq and gene expression exon array data from 57 different cell types generated by ENCODE at the University of Washington [68]. Exon array samples were consistently normalized using the GeneBASE software [69], whose output is gene-level expression values for all genes. From these values, we extracted expression values for all human TFs documented in the Animal Transcription Factor Database (AnimalTFDB) [70]. We then filtered out TFs whose expression values were nearly constant across all samples (defined as coefficient of variation < 0.2), because one would expect them to behave like an intercept term in a regression model; such nearly-constant TFs were not expected to provide much information to explain variation of the response variable across cell types. After filtering, 169 TFs were retained and used as predictors X in our regression models. We obtained the observed values of X for all samples, averaging replicate samples from the same cell type. This resulted in a gene expression matrix X with 57 rows and 169 columns, where rows correspond to the 57 cell types (n = 57) and columns to the 169 TFs (d = 169). Each row is a realization of X.
For the responses, we processed the DNase-seq data from the same 57 cell types as follows. First, the human genome was divided into 200 base pair (bp) non-overlapping windows, yielding approximately 1.65 × 10^7 windows. For each window and each DNase-seq sample, we calculated the DNase-seq signal by counting the number of DNase-seq reads overlapping the window. The read count was then normalized by the library size: the window read count was divided by the total read count of the sample and multiplied by the constant 17,002,867, the minimum total read count over all samples. The normalized counts were log2 transformed after adding a pseudocount of 1. Since windows without any DNase-seq signal are unlikely to be cis-elements, we retained only windows that had non-zero read counts in at least 10 cell types and a coefficient of variation no less than 1. From these retained windows, we randomly sampled 1,000 windows to serve as our testing cis-elements. For each cis-element, we extracted the normalized, log-transformed read counts from all DNase-seq samples as measurements of DNase I hypersensitivity, averaging replicate samples from the same cell type. This produced a matrix Y with 57 rows and 1,000 columns, where rows represent the 57 cell types and columns represent the 1,000 cis-elements. Each column corresponds to a response Y, containing the observed values of Y in all 57 cell types.
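The per-window normalization described above amounts to the following (a minimal sketch with toy counts; the constant 17,002,867 is the minimum library size reported in the text):

```python
import math

MIN_LIBRARY_SIZE = 17_002_867  # minimum total read count over all samples

def normalize_window(read_count, library_size):
    """Library-size normalization followed by the log2(x + 1) transform."""
    scaled = read_count / library_size * MIN_LIBRARY_SIZE
    return math.log2(scaled + 1.0)

# A window with 100 reads in a library twice the minimum size keeps an
# effective count of 50 before the pseudocount and log transform.
print(normalize_window(100, 2 * MIN_LIBRARY_SIZE))  # log2(51) ~= 5.672
```

Dividing by the sample's total count and rescaling to a common constant makes windows comparable across libraries of different sequencing depths; the pseudocount keeps zero-count windows finite after the log.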
Lastly, we used X and Y to perform the analyses. Our regression model was fitted for each cis-element (i.e., each column of Y) separately. Each regression has sample size n = 57 and dimension d = 169. An analysis of the whole dataset involves fitting 1,000 regression models.

C.2. RMRCE performance with different choices of α_n
For the real data experiment in Section 3, Figure 9 further compares the accuracy of RMRCE with α_n chosen by cross validation or fixed at 1, 3, 5, 7, 9. The performance is similar across different choices of α_n; choosing α_n by cross validation only leads to marginal performance gains compared to fixed α_n. This agrees with the conclusion from the synthetic data analysis. Since different choices of α_n (as long as α_n is large enough) lead to fairly robust results in real data applications, we recommend using a fixed α_n = 5, although choosing the optimal α_n via cross validation is encouraged if sufficient computational time and resources are available.

C.3. A model diagnostic heuristic
The monotonicity assumption is the most important and intrinsic feature of the proposed generalized regression model (1.1), and this section provides another heuristic to examine this assumption in real data applications. Figures 3 and 4 provide some empirical illustrations that Y has a monotonically increasing relationship with X T β RMRCE for two cis-elements. In this section we further discuss a model diagnostic tool to check the monotonicity assumption of our model, and we apply this tool to the real data example.
For model (1.1), our goal is to verify the assumption that the response Y has a monotonically increasing relationship with the linear term X^T β*. This assumption cannot be verified directly, since β* is unknown. Instead, we develop a model diagnostic heuristic that replaces β* with β_RMRCE and checks whether Y has a monotonically increasing relationship with X^T β_RMRCE in a cross-validation procedure. Specifically, we split the response and explanatory variables (Y, X) into two parts with equal numbers of observations: the first part (Y_1, X_1) contains the first half of the observations and the second part (Y_2, X_2) the second half. We fit RMRCE on (Y_1, X_1) to obtain β_RMRCE and calculate the Spearman's rank correlation between Y_2 and X_2^T β_RMRCE. We then use the one-sided Spearman's rank correlation test [71] to test whether this rank correlation is significantly positive.
As a demonstration, we apply the model diagnostic heuristic to the real data example of TF prediction. For the protein-binding activity Y and gene expression levels X of each of the 1,000 cis-elements, we split (Y, X) into (Y_1, X_1) and (Y_2, X_2), fit RMRCE on (Y_1, X_1), and perform the one-sided Spearman's rank correlation test on Y_2 and X_2^T β_RMRCE to examine the monotonicity assumption. The 1,000 resulting p-values are adjusted for multiple testing using the Bonferroni method; with an adjusted p-value smaller than 0.05, we reject the null hypothesis of no or negative association. The whole procedure is repeated for different choices of the RMRCE tuning parameters α_n and λ_n. Table 8 shows the percentage of adjusted p-values smaller than 0.05; most adjusted p-values fall below this threshold.
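The split-half diagnostic can be sketched in a few lines. Here Spearman's ρ is computed from ranks, and a one-sided p-value from the large-sample normal approximation z = ρ√(n−1); in practice one would use the exact test of [71], so this is an illustration only, and the held-out index values below are hypothetical.

```python
import math

def ranks(v):
    """Ranks of v (this sketch assumes no ties)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the (untied) rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2.0
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = sum((a - mean) ** 2 for a in rx)
    return num / den

def one_sided_pvalue(rho, n):
    """Normal approximation for H1: rho > 0."""
    z = rho * math.sqrt(n - 1)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# A monotone held-out relationship gives rho = 1 and a small p-value.
index = [0.1, 0.3, 0.5, 0.9, 1.4]   # hypothetical X2^T beta_RMRCE values
y2 = [1.0, 2.5, 4.0, 8.0, 16.0]     # hypothetical held-out responses
rho = spearman_rho(index, y2)
print(rho)  # 1.0
```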

Appendix D: Proofs
This section collects the proofs of the results in the paper. Recall Z_{ii'}(β) := (X_i − X_{i'})^T β and S_{ii'} := 1(Y_i > Y_{i'}). In the sequel, p_0(X_i, X_{i'}; β*) and p_1(X_i, X_{i'}; β*) denote the corresponding pairwise ordering probabilities conditional on the covariates.

D.1. Proof of Proposition 2.1
Proof. (i) Given the link function D ∘ Λ(u, v) = u + v, we have Y = X^T β* + ε, without requiring X and ε to be centered. First, by simple calculation, using the fact that X is independent of ε, the claim immediately follows.
(ii) This follows directly from the proof of Theorem 1 in [3].
Hence we complete the proof of the proposition.

D.2. Proof of Lemma 4.1
Proof. First of all, recall the definition of L̄_n. By the definitions of p_0(X_i, X_{i'}; β*) and p_1(X_i, X_{i'}; β*), after taking the conditional expectation of the response given the covariates, we obtain the stated expression, which we then rewrite according to whether Z_{ii'}(β) > 0 or not. Hence, with the expectation E in (D.1) taken with respect to the N_1(0, σ²_β) distribution of Z_{ii'}(β), the claim follows.
(2) With Σ = I, using (D.5) together with the joint normal distribution above, the expression follows; after further simplification, we deduce a double integral in (u, v).
Now suppose that the noise term follows a normal distribution with standard deviation σ. Then F(u) = Φ(u/(√2 σ)), which yields the corresponding double integral.
For the sigmoid, Gaussian CDF, and double exponential CDF approximations, numerical integration shows that the corresponding W(ρ) is an increasing function of |ρ|.

D.5. Proof of Proposition 4.5
Proof. We aim to validate the stated results for the i.i.d. noise terms {ε_i, i = 1, 2, · · · , n} with Gaussian or Cauchy distribution, where f(·) denotes the PDF of ε_2 − ε_1. For the noise term ε_i ∼ N(μ, σ²), the claim holds as b_n → 0, where Φ(·) is the CDF of the standard normal distribution. For the noise term ε_i ∼ Cauchy(μ, γ), the difference ε_2 − ε_1 follows Cauchy(0, 2γ), so f(x) = 1/{2πγ(1 + (x/(2γ))²)}. By simple calculation, we deduce the stated bound. With φ(·) the PDF of the standard normal distribution, using the fact (D.7) and combining it with (D.6) leads to the desired inequality. Since γ is bounded, the conclusion follows immediately. Hence we complete the proof of the proposition for both Gaussian and Cauchy distributed noise.

D.6. Proof of Lemma 4.6
Proof. Recall L(β) = −E[S_{ii'} 1(Z_{ii'}(β) > 0) + (1 − S_{ii'}) 1(Z_{ii'}(β) < 0)]. Under the monotonic transformation model (1.2), i.e., Y = G(X^T β* + ε), and using the monotonicity of G(·), we can rewrite S_{ii'} accordingly. Notice that L(β) involves both terms Z_{ii'}(β*) and Z_{ii'}(β). Using Assumption (A1), we immediately obtain the joint distribution of (Z_{ii'}(β*), Z_{ii'}(β))^T, which does not depend on the indices i and i'. For convenience, we introduce the shorthand in (D.8) and (D.9). After taking conditional expectations of the response given the covariates, we obtain the corresponding expression. To simplify it further, we define suitable regions of integration; simple calculation then leads to the displayed identity. Under the monotonic transformation model (1.2), using the monotonicity of G and the i.i.d. noise terms {ε_i, i = 1, 2, · · · , n} assumed in Assumption (A1), and with the simplified notation in (D.8) and (D.9), it follows that the expression involves F, the CDF of ε_i − ε_{i'} for i ≠ i'. Recall that, for any p > 0 and any random variable U ≥ 0, E[U^p] = p ∫_0^∞ t^{p−1} P(U > t) dt. Applying this formula, together with (D.12), to the relevant random variable gives (D.13). By simple calculation, with σ²_1 := σ²_{β*} = 2β*^T Σβ* and σ²_2 := σ²_β = 2β^T Σβ, and according to the joint distribution specified in (D.10), we obtain (D.14), where Φ(·) is the CDF of the standard normal distribution. Combining (D.13) and (D.14), we deduce the displayed expression. Exchanging differentiation and integration, the first derivative of the function K(ρ) takes the stated form, where φ(·) is the PDF of the standard normal distribution. With simple calculation, K'(ρ) can be simplified further, yielding K'(ρ) = −C(σ_1/π)(1 + o(1)) as ρ → 1.
This completes the proof.