Total Deviation Index

Reading Notes: Total deviation index for measuring individual agreement with applications in laboratory performance and bioequivalence

The TDI describes a boundary that covers a specified majority of the observed paired differences (expressed as a percentage when the data are log-transformed).

Method

Random variables Y and X

Assume finite first and second moments, with means $\mu _y ,~ \mu _x$, variances $\sigma _{y} ^2 ,~ \sigma _{x} ^2$, and covariance $\sigma _{y,x}$

$D = Y - X$

follows a univariate distribution with mean bias $\mu _y - \mu _x$ and residual variance $\sigma _{d} ^2 = \sigma _y ^2 + \sigma _x ^2 - 2 \sigma _{y,x}$

The mean squared deviation (MSD) is $\epsilon ^2 = E(D^{2}) = (\mu _y - \mu _x)^2 + \sigma _d ^2$
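As a quick numerical check, here is a minimal simulation sketch (not from the paper; all parameter values are made up for illustration) verifying the decomposition $\epsilon ^2 = (\mu _y - \mu _x)^2 + \sigma _d ^2$:

```python
# Simulation check of the MSD decomposition E(D^2) = bias^2 + sigma_d^2
# for bivariate normal (X, Y); all parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

mu_x, mu_y = 10.0, 10.5                      # hypothetical means
sigma_x, sigma_y, rho = 1.0, 1.2, 0.8        # hypothetical scales/correlation
cov = rho * sigma_x * sigma_y

xy = rng.multivariate_normal([mu_x, mu_y],
                             [[sigma_x**2, cov], [cov, sigma_y**2]], size=n)
d = xy[:, 1] - xy[:, 0]                      # D = Y - X

msd_empirical = np.mean(d**2)                # sample estimate of E(D^2)
sigma_d2 = sigma_y**2 + sigma_x**2 - 2*cov   # residual variance of D
msd_theoretical = (mu_y - mu_x)**2 + sigma_d2

print(msd_empirical, msd_theoretical)        # the two should nearly agree
```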

The proportion of the population with $|D|$ less than $\kappa$ is:

$P(|D| < \kappa) = \xi ^2 \left[ \kappa ^2, \sigma _d ^2 , (\mu _y - \mu _x )^2, \beta \right]$

where:

$\xi ^2$ is the cumulative distribution function of $D^2$, evaluated at $\kappa ^2$; it depends on $\sigma _d ^2$, $(\mu _y - \mu _x )^2$, and unknown parameters $\beta$

The exact TDI $\kappa _{1-p}$ satisfies $P(|D| < \kappa _{1-p}) = 1 - p$

Thus,

$\kappa _{1-p} ^2 = \xi ^{2(-1)} \left[ 1-p, \sigma _d ^2 , (\mu _y - \mu _x )^2, \beta \right]$

$\xi ^{2(-1)}$ is the inverse function of $\xi ^{2}$, i.e., it plays the role of a quantile function of $D^2$.

If X represents target values and Y represents observations, then $\kappa _{1-p}$ describes a boundary such that $100(1-p)$ per cent of the observations fall within that distance of their target values (much like a confidence interval).

When D follows a normal distribution,

$\kappa _{1-p} ^2 = \sigma _d ^2\chi ^{2(-1)} \left[ 1-p, 1 , (\mu _y - \mu _x )^2 / \sigma _d ^2 \right]$

where:

$\chi ^{2(-1)}$ is the inverse of the cumulative non-central chi-square distribution, here with one degree of freedom and non-centrality parameter $(\mu _y - \mu _x )^2 / \sigma _d ^2$.

$\chi ^{2(-1)}$ can be derived from the standard normal cumulative distribution function $\Phi$, which does not have a closed-form representation in terms of basic algebraic functions.
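The exact TDI is nevertheless easy to compute numerically. A minimal Python sketch, with assumed illustrative values for the bias and $\sigma _d$, using scipy's non-central chi-square quantile and checking the resulting coverage through $\Phi$:

```python
# Exact TDI under normality (illustrative parameters, not from the paper):
# kappa_{1-p}^2 = sigma_d^2 * (1-p) quantile of a non-central chi-square
# with df=1 and non-centrality (mu_y - mu_x)^2 / sigma_d^2.
import numpy as np
from scipy.stats import ncx2, norm

p = 0.10                          # target coverage 1 - p = 90 per cent
mu_d = 0.5                        # assumed bias mu_y - mu_x
sigma_d = np.sqrt(0.52)           # assumed residual s.d. of D

lam = (mu_d / sigma_d) ** 2       # non-centrality parameter
kappa = sigma_d * np.sqrt(ncx2.ppf(1 - p, df=1, nc=lam))

# Coverage check via Phi: for D ~ N(mu_d, sigma_d^2),
# P(|D| < kappa) = Phi((kappa - mu_d)/sigma_d) - Phi((-kappa - mu_d)/sigma_d)
coverage = (norm.cdf((kappa - mu_d) / sigma_d)
            - norm.cdf((-kappa - mu_d) / sigma_d))
print(kappa, coverage)            # coverage should come out as 0.90
```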

Approach 1 for $\kappa _{1-p} ^2$:

Holder and Hsuan proposed an upper bound for $\kappa _{1-p} ^2$:

$\kappa _{(1-p) + } ^2 = c _{(1-p)+} \, \epsilon ^2$

where $c_{(1-p)+}$ is a constant that does not depend on D. Hence $\kappa _{(1-p) + } ^2$ guarantees that at least $100(1-p)$ per cent of observations fall within the boundary from their target values, and $\kappa _{(1-p) + }$ is proportional to the square root of the MSD. __However, the asymptotic distributional properties of this approach have not been established.__

Approach 2 for $\kappa _{1-p} ^2$:

$\kappa _{(1-p)\sim} ^2 = \chi ^{2(-1)} ( 1-p, 1 ) \epsilon ^2$

Note that $\kappa _{(1-p)\sim} ^2$ is an upper bound for $\kappa _{1-p} ^2$ when $1-p > 90$ per cent

__Within an allowed tolerance of under- or over-approximation, this approach behaves well when:__

- $1-p = 90$ per cent and $0 < (\mu _y - \mu _x )^2 / \sigma _d ^2 \leq 2$
- $1-p = 95$ per cent and $0 < (\mu _y - \mu _x )^2 / \sigma _d ^2 \leq 1$
- $1-p = 99$ per cent and $0 < (\mu _y - \mu _x )^2 / \sigma _d ^2 \leq 1/2$

TDI: $\kappa _{(1-p)\sim} = \Phi ^{(-1)} ( 1-p/2 ) \, | \epsilon |$, which is the square root of the approach 2 expression, since $\chi ^{2(-1)}(1-p, 1) = \left[ \Phi ^{(-1)}(1-p/2) \right] ^2$.
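A short sketch of approach 2, reusing the illustrative parameters from the sketch above, checking the quantile identity behind the square-root form and comparing against the exact TDI:

```python
# Approach 2 approximation vs. the exact TDI (illustrative parameters).
import numpy as np
from scipy.stats import chi2, ncx2, norm

p = 0.10
mu_d, sigma_d2 = 0.5, 0.52
eps2 = mu_d**2 + sigma_d2                     # MSD epsilon^2

# chi-square(1) quantile equals the squared normal quantile,
# which is why the TDI reduces to Phi^{-1}(1 - p/2) * |epsilon|
assert np.isclose(chi2.ppf(1 - p, df=1), norm.ppf(1 - p / 2) ** 2)
kappa_approx = norm.ppf(1 - p / 2) * np.sqrt(eps2)

# exact value for comparison; here (mu_y - mu_x)^2 / sigma_d^2 is about
# 0.48, inside the stated range <= 2 for 1 - p = 90 per cent
lam = mu_d**2 / sigma_d2
kappa_exact = np.sqrt(sigma_d2 * ncx2.ppf(1 - p, df=1, nc=lam))
print(kappa_approx, kappa_exact)              # the two should be close
```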

Hypothesis test

Suppose we would like to ensure that approximately $100(1-p)$ per cent of the absolute differences between paired observations Y and X are less than a predetermined constant $\kappa _0$

Our $H_{0}: \kappa _{(1-p) \sim} \geq \kappa _0$, or equivalently $H_0 : \ln(\epsilon ^2) \geq \ln(\kappa _{0} ^2) - \ln \left[ \chi ^{2(-1)} (1-p,1) \right]$

Our $H_{1}: \kappa _{(1-p) \sim} < \kappa _0$, or equivalently $H_1 : \ln(\epsilon ^2) < \ln(\kappa _{0} ^2) - \ln \left[ \chi ^{2(-1)} (1-p,1) \right]$

Note: $H_0$ uses $\geq$ rather than $>$ because $\kappa _{(1-p) \sim}$ is itself an upper bound on the exact value.

We reject $H_0$, and accept that $100(1-p)$ per cent of the absolute differences of the pairs are less than $\kappa _0$ with type I error $\alpha$, if

the upper $100(1-\alpha)$ per cent confidence limit of $\epsilon ^2$ (computed from the observations) is less than the ideal MSD $\kappa _0 ^2 / \chi ^{2(-1)}(1-p, 1)$

(obtained by inverting the relation $\kappa _{(1-p)\sim} ^2 = \chi ^{2(-1)} ( 1-p, 1 )\, \epsilon ^2$ at $\kappa _0$)
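A hedged sketch of the resulting one-sided test; the variance of $\ln(\hat{\epsilon} ^2)$ below is a generic moment/delta-method estimate, not necessarily the estimator used in the paper:

```python
# One-sided TDI test: reject H0 if the upper (1 - alpha) confidence limit
# of ln(eps2_hat) falls below ln(kappa_0^2) - ln(chi2 quantile).
# The variance of ln(eps2_hat) is a delta-method/moment approximation.
import numpy as np
from scipy.stats import chi2, norm

def tdi_test(d, kappa0, p=0.10, alpha=0.05):
    d = np.asarray(d, dtype=float)
    n = d.size
    eps2_hat = np.mean(d**2)                  # observed MSD
    # delta method: Var(ln eps2_hat) ~ Var(d^2) / (n * eps2_hat^2)
    var_log = np.var(d**2, ddof=1) / (n * eps2_hat**2)
    upper = np.log(eps2_hat) + norm.ppf(1 - alpha) * np.sqrt(var_log)
    threshold = np.log(kappa0**2) - np.log(chi2.ppf(1 - p, df=1))
    return upper < threshold                  # True means reject H0

rng = np.random.default_rng(1)
d = rng.normal(0.1, 0.5, size=100)            # simulated paired differences
print(tdi_test(d, kappa0=1.2))
```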

An alternative $H _1: \kappa _{ (1-p) \sim } = \kappa _1$, where $0 < \kappa _1 < \kappa _0$, can be used to calculate asymptotic power.
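A rough sketch of that asymptotic power calculation under the normal approximation for $\ln \hat{\epsilon} ^2$; here $\sigma _W$, the asymptotic standard deviation of $\ln \hat{\epsilon} ^2$, is assumed given (e.g. from the delta-method estimate above) rather than taken from the paper:

```python
# Asymptotic power of the one-sided test under H1: kappa_{(1-p)~} = kappa_1.
# The chi-square term cancels in the difference of the two log-boundaries.
import numpy as np
from scipy.stats import norm

def asymptotic_power(kappa0, kappa1, sigma_W, alpha=0.05):
    # shift in ln(eps^2) between the H0 boundary and the H1 value
    delta = np.log(kappa0**2) - np.log(kappa1**2)
    return norm.cdf(delta / sigma_W - norm.ppf(1 - alpha))

print(asymptotic_power(kappa0=1.2, kappa1=0.9, sigma_W=0.2))
```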