
Parameter Estimation
To implement the Modified Gompertz growth model, initial values of the parameters $$a$$, $$b$$, $$c$$ and $$d$$ must be determined. When analyzing reliability data in RGA, you have the option to enter the reliability values in percent or in decimal format. However, $$a$$ and $$d$$ will always be returned in decimal format and not in percent. The estimated parameters in RGA are unitless. Given that $$R=d+a{{b}^{{{c}^{T}}}}$$ and $$\ln (R-d)=\ln (a)+{{c}^{T}}\ln (b)$$, it follows that $${{S}_{1}}$$, $${{S}_{2}}$$ and $${{S}_{3}}$$, as defined in the derivation of the Standard Gompertz model, can be expressed as functions of $$d$$.


 * $$\begin{align}
& {{S}_{1}}(d)= & \underset{i=0}{\overset{n-1}{\mathop \sum }}\,\ln ({{R}_{i}}-d)=n\ln (a)+\ln (b)\underset{i=0}{\overset{n-1}{\mathop \sum }}\,{{c}^{{{T}_{i}}}} \\ & {{S}_{2}}(d)= & \underset{i=n}{\overset{2n-1}{\mathop \sum }}\,\ln ({{R}_{i}}-d)=n\ln (a)+\ln (b)\underset{i=n}{\overset{2n-1}{\mathop \sum }}\,{{c}^{{{T}_{i}}}} \\ & {{S}_{3}}(d)= & \underset{i=2n}{\overset{m-1}{\mathop \sum }}\,\ln ({{R}_{i}}-d)=n\ln (a)+\ln (b)\underset{i=2n}{\overset{m-1}{\mathop \sum }}\,{{c}^{{{T}_{i}}}} \end{align}$$

Modifying Eqns. (eq9), (eq10) and (eq11) as functions of $$d$$  yields:


 * $$\begin{align}
& c(d)= & {{\left[ \frac{{{S}_{3}}(d)-{{S}_{2}}(d)}{{{S}_{2}}(d)-{{S}_{1}}(d)} \right]}^{\tfrac{1}{n\cdot I}}} \\ & a(d)= & {{e}^{\left[ \tfrac{1}{n}\left( {{S}_{1}}(d)+\tfrac{{{S}_{2}}(d)-{{S}_{1}}(d)}{1-{{[c(d)]}^{n\cdot I}}} \right) \right]}} \\ & b(d)= & {{e}^{\left[ \tfrac{\left[ {{S}_{2}}(d)-{{S}_{1}}(d) \right]\left[ {{[c(d)]}^{I}}-1 \right]}{{{\left[ 1-{{[c(d)]}^{n\cdot I}} \right]}^{2}}} \right]}} \end{align}$$

where $$I$$  is the time interval increment. At this point, you can use the initial constraint of:


 * $$d+ab=\text{original level of reliability at }T=0$$

Now there are four equations, Eqns. (eq17), (eq18), (eq19) and (eq20), and four unknowns, $$a$$, $$b$$, $$c$$ and $$d$$. The simultaneous solution of these equations yields the four initial values for the parameters of the Modified Gompertz model. This procedure is similar to the one discussed before. It starts by using initial estimates of the parameters $$a$$, $$b$$, $$c$$ and $$d$$, denoted as $$g_{1}^{(0)},$$ $$g_{2}^{(0)},$$ $$g_{3}^{(0)}$$ and $$g_{4}^{(0)},$$ where $$^{(0)}$$ is the iteration number. The Taylor series expansion approximates the mean response, $$f({{T}_{i}},\delta )$$, around the starting values $$g_{1}^{(0)},$$ $$g_{2}^{(0)},$$ $$g_{3}^{(0)}$$ and $$g_{4}^{(0)}$$. For the $${{i}^{th}}$$ observation:
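As an illustration, the group sums and the back-solved parameters can be computed numerically. The sketch below assumes equally spaced times with increment $$I$$, $$m=3n$$ observations split into three equal groups, and hypothetical helper names (`initial_estimates`, `solve_initial_values`); it is not RGA's implementation.

```python
import numpy as np

def initial_estimates(T, R, d):
    """For a trial d, compute c(d), a(d), b(d) from the group sums S1, S2, S3.
    Assumes m = 3n observations at equally spaced times with increment I."""
    m = len(R)
    n = m // 3
    I = T[1] - T[0]                          # time interval increment
    lnRd = np.log(R - d)                     # ln(R_i - d)
    S1, S2, S3 = lnRd[:n].sum(), lnRd[n:2 * n].sum(), lnRd[2 * n:].sum()
    c = ((S3 - S2) / (S2 - S1)) ** (1.0 / (n * I))
    a = np.exp((S1 + (S2 - S1) / (1.0 - c ** (n * I))) / n)
    b = np.exp((S2 - S1) * (c ** I - 1.0) / (1.0 - c ** (n * I)) ** 2)
    return a, b, c

def solve_initial_values(T, R, lo, hi, tol=1e-10):
    """Bisect on d until d + a(d)*b(d) equals R[0], the original level of
    reliability at T = 0 (assumes the residual changes sign on [lo, hi])."""
    def residual(d):
        a, b, _ = initial_estimates(T, R, d)
        return d + a * b - R[0]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    d = 0.5 * (lo + hi)
    return (*initial_estimates(T, R, d), d)
```

With exact model data, `initial_estimates` reproduces the parameters when the trial $$d$$ equals the true value, which makes a one-dimensional root search on the constraint a convenient way to obtain all four starting values.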


 * $$f({{T}_{i}},\delta )\simeq f({{T}_{i}},{{g}^{(0)}})+\underset{k=1}{\overset{p}{\mathop \sum }}\,{{\left[ \frac{\partial f({{T}_{i}},\delta )}{\partial {{\delta }_{k}}} \right]}_{\delta ={{g}^{(0)}}}}\cdot ({{\delta }_{k}}-g_{k}^{(0)})$$


 * where:


 * $${{g}^{(0)}}=\left[ \begin{matrix}

g_{1}^{(0)} \\ g_{2}^{(0)} \\ g_{3}^{(0)} \\ g_{4}^{(0)} \\ \end{matrix} \right]$$


 * Let:


 * $$\begin{align}

& f_{i}^{(0)}= & f({{T}_{i}},{{g}^{(0)}}) \\ & \nu _{k}^{(0)}= & ({{\delta }_{k}}-g_{k}^{(0)}) \\ & D_{ik}^{(0)}= & {{\left[ \frac{\partial f({{T}_{i}},\delta )}{\partial {{\delta }_{k}}} \right]}_{\delta ={{g}^{(0)}}}} \end{align}$$


 * Therefore:


 * $${{Y}_{i}}=f_{i}^{(0)}+\underset{k=1}{\overset{p}{\mathop \sum }}\,D_{ik}^{(0)}\nu _{k}^{(0)}$$

or by shifting $$f_{i}^{(0)}$$  to the left of the equation:


 * $${{Y}_{i}}-f_{i}^{(0)}=\underset{k=1}{\overset{p}{\mathop \sum }}\,D_{ik}^{(0)}\nu _{k}^{(0)}$$

In matrix form, this is given by:


 * $${{Y}^{(0)}}\simeq {{D}^{(0)}}{{\nu }^{(0)}}$$


 * where:


 * $${{Y}^{(0)}}=\left[ \begin{matrix}
{{Y}_{1}}-f_{1}^{(0)} \\ \vdots \\ {{Y}_{N}}-f_{N}^{(0)} \end{matrix} \right]=\left[ \begin{matrix} {{Y}_{1}}-g_{4}^{(0)}-g_{1}^{(0)}{{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{1}}}}}} \\ \vdots \\ {{Y}_{N}}-g_{4}^{(0)}-g_{1}^{(0)}{{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{N}}}}}} \end{matrix} \right]$$


 * $$\begin{align}
& {{D}^{(0)}}= & \left[ \begin{matrix} D_{11}^{(0)} & D_{12}^{(0)} & D_{13}^{(0)} & D_{14}^{(0)} \\ \vdots & \vdots & \vdots & \vdots \\ D_{N1}^{(0)} & D_{N2}^{(0)} & D_{N3}^{(0)} & D_{N4}^{(0)} \end{matrix} \right] \\ & = & \left[ \begin{matrix} {{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{1}}}}}} & \tfrac{g_{1}^{(0)}}{g_{2}^{(0)}}{{\left( g_{3}^{(0)} \right)}^{{{T}_{1}}}}{{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{1}}}}}} & \tfrac{g_{1}^{(0)}}{g_{3}^{(0)}}{{\left( g_{3}^{(0)} \right)}^{{{T}_{1}}}}\ln (g_{2}^{(0)}){{T}_{1}}{{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{1}}}}}} & 1 \\ \vdots & \vdots & \vdots & \vdots \\ {{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{N}}}}}} & \tfrac{g_{1}^{(0)}}{g_{2}^{(0)}}{{\left( g_{3}^{(0)} \right)}^{{{T}_{N}}}}{{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{N}}}}}} & \tfrac{g_{1}^{(0)}}{g_{3}^{(0)}}{{\left( g_{3}^{(0)} \right)}^{{{T}_{N}}}}\ln (g_{2}^{(0)}){{T}_{N}}{{\left( g_{2}^{(0)} \right)}^{{{\left( g_{3}^{(0)} \right)}^{{{T}_{N}}}}}} & 1 \end{matrix} \right] \end{align}$$


 * $${{\nu }^{(0)}}=\left[ \begin{matrix}
\nu _{1}^{(0)} \\ \nu _{2}^{(0)} \\ \nu _{3}^{(0)} \\ \nu _{4}^{(0)} \\ \end{matrix} \right]$$
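Assuming the mean response $$f(T)=d+a{{b}^{{{c}^{T}}}}$$ with $$g=(a,b,c,d)$$, the residual vector $${{Y}^{(0)}}$$ and the matrix of partial derivatives $${{D}^{(0)}}$$ can be assembled as in the following sketch (the function name and parameter ordering are illustrative, not RGA's internals):

```python
import numpy as np

def linearization_matrices(T, Y, g):
    """Build Y^(0) and D^(0) for f(T; a,b,c,d) = d + a*b**(c**T),
    where g = (a, b, c, d) is the current parameter estimate."""
    a, b, c, d = g
    cT = c ** T                                # c^{T_i}
    bcT = b ** cT                              # b^{c^{T_i}}
    f = d + a * bcT                            # model prediction f_i^{(0)}
    Y0 = Y - f                                 # residuals Y_i - f_i^{(0)}
    D = np.column_stack([
        bcT,                                   # df/da = b^{c^T}
        (a / b) * cT * bcT,                    # df/db = a c^T b^{c^T - 1}
        (a / c) * cT * np.log(b) * T * bcT,    # df/dc = a b^{c^T} ln(b) T c^{T-1}
        np.ones_like(T),                       # df/dd = 1
    ])
    return Y0, D
```

Each column of `D` is one partial derivative evaluated at the current estimates; a finite-difference check is an easy way to validate them.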

The same reasoning as before is followed here, and the estimate of the correction vector $${{\nu }^{(0)}}$$ is given by:


 * $${{\widehat{\nu }}^{(0)}}={{\left( {{D}^{(0)T}}{{D}^{(0)}} \right)}^{-1}}{{D}^{(0)T}}{{Y}^{(0)}}$$

The revised estimated regression coefficients in matrix form are:


 * $${{g}^{(1)}}={{g}^{(0)}}+{{\widehat{\nu }}^{(0)}}$$

To see if the revised regression coefficients will lead to a reasonable result, the least squares criterion measure, $$Q$$, should be checked. According to the Least Squares Principle, the solution for the parameters is the set of values that minimizes $$Q$$. With the starting coefficients, $${{g}^{(0)}}$$, $$Q$$ is:


 * $${{Q}^{(0)}}=\underset{i=1}{\overset{N}{\mathop \sum }}\,{{\left( {{Y}_{i}}-f({{T}_{i}},{{g}^{(0)}}) \right)}^{2}}$$

With the coefficients at the end of the first iteration, $${{g}^{(1)}}$$,  $$Q$$  is:


 * $${{Q}^{(1)}}=\underset{i=1}{\overset{N}{\mathop \sum }}\,{{\left( {{Y}_{i}}-f({{T}_{i}},{{g}^{(1)}}) \right)}^{2}}$$

For the Gauss-Newton method to work properly, and to satisfy the Least Squares Principle, the relationship $${{Q}^{(k+1)}}<{{Q}^{(k)}}$$ has to hold for all $$k$$, meaning that $${{g}^{(k+1)}}$$ gives a better estimate than $${{g}^{(k)}}$$. The problem is not yet completely solved, however: $${{g}^{(1)}}$$ now becomes the new set of starting values, producing a new set of values $${{g}^{(2)}}$$. The process is continued until the following relationship has been satisfied:


 * $${{Q}^{(s-1)}}-{{Q}^{(s)}}\simeq 0$$
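Putting the pieces together, the iteration and the stopping rule above can be sketched as follows for the modified Gompertz response $$f(T)=d+a{{b}^{{{c}^{T}}}}$$ (a minimal illustration; `gauss_newton_fit` is a hypothetical name, not RGA's implementation):

```python
import numpy as np

def gauss_newton_fit(T, Y, g0, tol=1e-12, max_iter=100):
    """Iterate g <- g + v, where v solves D v = Y0 in the least squares
    sense, until Q^(s-1) - Q^(s) is approximately zero."""
    g = np.asarray(g0, dtype=float)
    Q_prev = np.inf
    for _ in range(max_iter):
        a, b, c, d = g
        cT = c ** T
        bcT = b ** cT
        resid = Y - (d + a * bcT)            # Y_i - f_i^{(s)}
        Q = np.sum(resid ** 2)               # least squares criterion measure
        if Q_prev - Q < tol:                 # Q^{(s-1)} - Q^{(s)} ~ 0
            break
        Q_prev = Q
        D = np.column_stack([bcT,
                             (a / b) * cT * bcT,
                             (a / c) * cT * np.log(b) * T * bcT,
                             np.ones_like(T)])
        # Least-squares solve of D v = resid; mathematically equivalent to
        # (D'D)^{-1} D' resid but better conditioned numerically.
        v, *_ = np.linalg.lstsq(D, resid, rcond=None)
        g = g + v
    return g
```

In practice `np.linalg.lstsq` (or a QR factorization) is preferred to forming $${{\left( {{D}^{T}}D \right)}^{-1}}$$ explicitly, since the normal equations square the condition number of $$D$$.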

As mentioned previously, when using the Gauss-Newton method or some other estimation procedure, it is advisable to try several sets of starting values to make sure that the solution gives relatively consistent results. Note that RGA uses a different analysis method, called Levenberg-Marquardt. This method combines the best features of the Gauss-Newton method and the method of steepest descent, and occupies a middle ground between these two methods.
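RGA's Levenberg-Marquardt implementation is not published; as a stand-in illustration, SciPy's `least_squares` routine offers a Levenberg-Marquardt solver (`method='lm'`) that can be applied to the same residual function (the wrapper name and starting values here are hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_modified_gompertz(T, Y, g0):
    """Fit R(T) = d + a*b**(c**T) by Levenberg-Marquardt via SciPy.
    g0 = (a, b, c, d) are starting values, e.g. from the S1/S2/S3 procedure."""
    def residuals(g):
        a, b, c, d = g
        return Y - (d + a * b ** (c ** T))   # Y_i - f(T_i; g)
    return least_squares(residuals, g0, method='lm').x
```

Because an undamped Gauss-Newton step can overshoot when the starting values are rough, the damped Levenberg-Marquardt step tends to be more forgiving, which is the "middle ground" behavior described above.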