Template:Lognormal distribution rank regression on Y

Rank Regression on Y
Performing a rank regression on Y requires that a straight line be fitted to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized.

The least squares parameter estimation method (regression analysis) was discussed in the Parameter Estimation chapter, where the following equations for regression on Y were derived; they are again applicable here:


 * $$\hat{a}=\bar{y}-\hat{b}\bar{x}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}-\hat{b}\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}}{N}$$

and:


 * $$\hat{b}=\frac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}{{y}_{i}}-\tfrac{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}}\underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{y}_{i}}}{N}}{\underset{i=1}{\overset{N}{\mathop{\sum }}}\,x_{i}^{2}-\tfrac{{{\left( \underset{i=1}{\overset{N}{\mathop{\sum }}}\,{{x}_{i}} \right)}^{2}}}{N}}$$
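As a sketch, the two estimators above can be computed directly from paired data; the function name and the sample values below are illustrative, not part of the original text:

```python
def rry_fit(xs, ys):
    """Least squares intercept a-hat and slope b-hat (rank regression on Y).

    Implements the closed-form equations:
      b = (sum(x*y) - sum(x)*sum(y)/N) / (sum(x^2) - (sum(x))^2/N)
      a = mean(y) - b * mean(x)
    """
    n = len(xs)
    sum_x, sum_y = sum(xs), sum(ys)
    sum_xy = sum(x * y for x, y in zip(xs, ys))
    sum_x2 = sum(x * x for x in xs)

    b = (sum_xy - sum_x * sum_y / n) / (sum_x2 - sum_x ** 2 / n)
    a = sum_y / n - b * sum_x / n
    return a, b
```

For example, `rry_fit([1, 2, 3, 4], [3, 5, 7, 9])` recovers the exact line y = 1 + 2x, since those points are collinear.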

In our case the equations for $${{y}_{i}}$$  and $$x_{i}$$ are:


 * $${{y}_{i}}={{\Phi }^{-1}}\left[ F(t_{i}^{\prime }) \right]$$

and:


 * $${{x}_{i}}=t_{i}^{\prime }$$

where $$F(t_{i}^{\prime })$$  is estimated from the median ranks. Once $$\widehat{a}$$  and  $$\widehat{b}$$  are obtained, then  $$\widehat{\sigma }$$  and  $$\widehat{\mu }$$  follow directly: since the fitted line is $$y=\widehat{a}+\widehat{b}x$$ while $$y_{i}=\tfrac{t_{i}^{\prime }-\mu }{\sigma }$$, we have $$\widehat{\sigma }=\tfrac{1}{\widehat{b}}$$ and $$\widehat{\mu }=-\tfrac{\widehat{a}}{\widehat{b}}$$.

Example 2: