Calculation Procedures for Multiple T-test¶
Blunder detection is based on a multiple t-test. We attempt to estimate a gross error in each observation, one at a time. This is done by introducing a gross-error parameter, which expands the error-equation matrix with an extra column. The column has the value one in the row of the observation in question. If the observation is a direction, the Schreiber equation of the set must also get the value one.

Mathematical Foundation¶
The new gross-error parameter means that the normal equations must also be expanded with a column, and the constant-term column gets one more element. The reduction of the normal equations has already been done; it only remains to reduce the new column and the new constant term. The last reduced element of the column is denoted by \(a\) and the reduced constant term by \(b\) in the figure below.
If we imagine that an identity matrix was placed on the right side and reduced in the usual way, the "unit column" for the gross error parameter would consist of only one term and after reduction would equal \(c = 1/a\).

We thus have the following quantities that we need for the gross error test:
- Estimated gross error: \(\nabla = \frac{b}{a}\)
- Weight coefficient: \(q_{\nabla\nabla} = c^2 = \frac{1}{a^2}\)
- Reduction of the error sum of squares: \(b^2\)
- Standard deviation of unit weight: \(m_0^2 = \frac{\sum pvv}{f}\), where \(f\) is the number of redundancies
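The quantities above can be sketched with a square-root (Cholesky) reduction of the expanded normal equations. The example below is illustrative, not the software's actual implementation: it assumes unit weights and a toy network of four observations of a single unknown height, with a simulated blunder of about +2 in the last observation. The gross-error column has a one in the suspect observation's row and zeros elsewhere, \(a\) is the last reduced diagonal element, and \(b\) the last reduced constant term.

```python
import math

def cholesky_upper(N):
    """Upper-triangular Cholesky factor R with N = R^T R."""
    n = len(N)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = N[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = math.sqrt(s) if i == j else s / R[i][i]
    return R

def forward_solve(R, rhs):
    """Solve R^T z = rhs (forward substitution): z is the reduced constant-term column."""
    z = []
    for i in range(len(rhs)):
        z.append((rhs[i] - sum(R[k][i] * z[k] for k in range(i))) / R[i][i])
    return z

l = [10.02, 9.98, 10.01, 12.00]  # observations of one unknown height (toy data)
suspect = 3                       # test the last observation for a gross error

# Error-equation matrix expanded with the gross-error column:
# a one in the suspect observation's row, zeros elsewhere.
A = [[1.0, 1.0 if i == suspect else 0.0] for i in range(len(l))]

# Normal equations N x = rhs (all weights 1 in this sketch).
N = [[sum(A[i][r] * A[i][c] for i in range(len(l))) for c in range(2)]
     for r in range(2)]
rhs = [sum(A[i][c] * l[i] for i in range(len(l))) for c in range(2)]

R = cholesky_upper(N)
z = forward_solve(R, rhs)

a, b = R[-1][-1], z[-1]  # last reduced diagonal element and constant term
nabla = b / a            # estimated gross error
q = 1.0 / a**2           # weight coefficient of nabla
print(nabla, q, b**2)    # b**2 is the reduction of the error sum of squares
```

Here `nabla` comes out as the suspect observation minus the mean of the remaining three, which is exactly the gross error a separate re-adjustment would estimate.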
Criterion for Gross Error¶
We can assert a gross error if:
\(\left| \frac{\nabla}{\sqrt{q_{\nabla\nabla}}}\right| > t_{f,1-\alpha/2} \cdot m_0\)
where:

- \(f\) is the number of redundancies
- \(\alpha\) is the chosen error probability, e.g., 0.1%
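A minimal sketch of applying the criterion. The Python standard library has no Student-t quantile, so the normal quantile from `statistics.NormalDist` is used here as a large-\(f\) approximation of \(t_{f,1-\alpha/2}\); with few redundancies a proper t table should be consulted instead. The numeric values are illustrative only.

```python
import math
from statistics import NormalDist

def blunder_test(nabla, q, m0, f, alpha=0.001):
    """Return (test statistic, critical value, flagged?).

    f is accepted for completeness but ignored by the normal
    approximation of the t quantile used below.
    """
    t_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # ~ t_{f,1-alpha/2} for large f
    stat = abs(nabla / math.sqrt(q))
    return stat, t_crit * m0, stat > t_crit * m0

# Illustrative numbers: estimated blunder 0.050 m, q = 1.3, m0 = 0.010 m.
stat, limit, flagged = blunder_test(0.050, 1.3, 0.010, f=20)
print(flagged)
```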
Warning
The prerequisite for asserting a gross error in an observation is that the reduced normal equations used are derived from a free adjustment. If blunder detection is performed based on a constrained adjustment, we cannot assert an error unless its cause is discovered, e.g., as a keypunch error. Discarding observations after a constrained adjustment can also be justified when we know with great certainty that the control points contain only negligible errors, for example when the control points are first-order points.
Error Probability¶
The error probability chosen for a single gross-error test of one observation must be small enough that the total error probability after testing all observations remains acceptable. If the estimated gross errors were independent of each other, the total error probability for \(n\) observations would be:
\(1-(1-\alpha)^n\)
Although this independence does not actually hold, in practice one can still assume it when calculating the total error probability. With a large number of observations, it is common to set the error probability of each individual test to 0.1%.
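The formula above is easy to check numerically: with \(\alpha = 0.1\%\) per test and, say, 200 observations, there is roughly an 18% chance of at least one false rejection under the independence assumption (the count of 200 is an illustrative choice, not from the text).

```python
# Total error probability 1 - (1 - alpha)^n under assumed independence.
alpha, n = 0.001, 200
total = 1.0 - (1.0 - alpha) ** n
print(round(total, 3))
```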
Note
Blunder detection can only be performed if the normal equation system contains at least 2 redundant measurements.