
Could Sherlock Holmes Have Been a Good Geologist?/The Measurement

"-But, ... the height of the murderer...
how could you find it out, if we have never seen the man?
-Dear Watson, you saw me doing measurements in the room.
The distance between two footsteps showed me clearly the size of our man..."
Sherlock Holmes, in "Study in Scarlet".

Measurement is a quantified observation of a phenomenon, and it is one of the forms of observation most commonly employed in investigative work. However, the results of a measurement are almost never informative by themselves. It is therefore necessary to "refine" them before reaching conclusions based on these data. To "refine" the data we suggest two procedures:

a‑ the search for experimental errors, and

b‑ the synthesis of the information.

The search for experimental errors is based on the determination and subsequent correction of the systematic error and on the evaluation of the random error. To determine the systematic error, we use equation (1).


(1)

Where,

Ssist - is the systematic error of the data,

xi - is the basic result (the result of the analysis),

yi - is the control result, and

n - is the number of data pairs, which in this type of test should be equal to or greater than 100.

If the systematic error lies between 0.95 and 1.05 (both values included), we can assume that there is no systematic error and calculate the random error directly. Otherwise, we first eliminate this error by subtracting the systematic error from each basic value in the studied sample.
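Since equation (1) is not reproduced here, the sketch below assumes one common form of the check, the ratio of the summed basic results to the summed control results; the function names and the multiplicative correction applied when the ratio falls outside 0.95-1.05 are illustrative assumptions rather than the exact procedure of the original text.

```python
# Hypothetical sketch of the systematic-error check described above.
# Equation (1) is not reproduced in the text; S_sist is assumed here to be
# the ratio of the summed basic results to the summed control results.

def systematic_error(basic, control):
    """Assumed form of equation (1): sum(x_i) / sum(y_i) over paired data."""
    if len(basic) != len(control):
        raise ValueError("basic and control results must be paired")
    if len(basic) < 100:
        raise ValueError("the test calls for at least 100 pairs of results")
    return sum(basic) / sum(control)

def remove_systematic_error(basic, s_sist):
    """Correct each basic value. The text describes subtracting the error
    from each basic value; because S_sist is treated here as a ratio, a
    multiplicative correction is assumed instead."""
    return [x / s_sist for x in basic]

# Usage (illustrative):
# s = systematic_error(basic_results, control_results)
# if not 0.95 <= s <= 1.05:
#     basic_results = remove_systematic_error(basic_results, s)
```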

The random error is calculated using equation (2).


(2)

For quantitative analysis, Soloviov and Matvieiev (1985) indicate that if the random error exceeds 1.6, the quality of the data cannot be relied on. For qualitative or semi-quantitative analysis, other authors (N.A.S.S.S.R., 1983) propose a limit value of 3. In any case, if the random error exceeds the established limit, it is recommended to repeat the analysis with another methodology or with more accurate equipment.
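These limits can be wrapped in a simple acceptance check. In the sketch below the random error itself is taken as an already-computed input, since equation (2) is not reproduced here; only the limit values 1.6 and 3 come from the text, and the constant and function names are illustrative.

```python
# Acceptance check for the random error. Equation (2) is not reproduced in
# the text, so the error value is assumed to be computed elsewhere; only the
# limits (1.6 for quantitative, 3 for qualitative or semi-quantitative
# analysis) come from the sources cited above.

QUANTITATIVE_LIMIT = 1.6        # Soloviov and Matvieiev (1985)
SEMI_QUANTITATIVE_LIMIT = 3.0   # N.A.S.S.S.R. (1983)

def data_quality_acceptable(random_error, quantitative=True):
    """Return True when the random error stays within the established limit."""
    limit = QUANTITATIVE_LIMIT if quantitative else SEMI_QUANTITATIVE_LIMIT
    return random_error <= limit

# If the check fails, the analyses should be repeated with another
# methodology or with more accurate equipment.
```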

The synthesis of the information also consists of two stages:

a- The construction of histograms and frequency polygons, and

b‑ The statistical pre-elaboration of the measurements.

Histograms and frequency polygons not only permit a graphic and compact representation of the measurements, but they can also reveal other characteristics of the sample, such as the existence of outlier values, the values of the modes, the median, the average, etc. (Ostle, 1973; Valls, 1985).
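As a minimal illustration of this stage (not part of the original text), the sketch below draws a histogram and the corresponding frequency polygon for a set of measurements; the example data, the bin count and the use of NumPy and Matplotlib are all assumptions.

```python
# Minimal sketch: histogram and frequency polygon of a set of measurements.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
measurements = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # example data only

counts, edges = np.histogram(measurements, bins=12)
centers = (edges[:-1] + edges[1:]) / 2                        # class midpoints

plt.bar(centers, counts, width=np.diff(edges), edgecolor="black",
        alpha=0.5, label="histogram")
plt.plot(centers, counts, marker="o", color="red", label="frequency polygon")
plt.xlabel("measured value")
plt.ylabel("frequency")
plt.legend()
plt.show()
```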

A detailed explanation of the statistical pre-elaboration of the data is beyond the scope of this paper. Here we will only mention some steps that are common to most geological data (a brief computational sketch of some of them follows the list):

a- Detection and processing of the statistical outliers and extreme values of the studied sample,

b‑ Transformation of the data (optional),

c‑ Determination of the Distribution Law (Kashdan et al., 1979),

d‑ Determination of the median and the relative and absolute modes,

e‑ Determination of the mean, the standard deviation, the variability coefficient, and their confidence intervals, and

f‑ Determination of the thresholds.
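As a rough illustration of steps d and e (the modes are best read from the histogram built in the previous stage), the sketch below computes the median, mean, standard deviation, variability coefficient and a confidence interval for the mean; the use of NumPy and SciPy and the 95% confidence level are assumptions, and the distribution law (step c) and the thresholds (step f) are not treated here.

```python
# Illustrative sketch of steps d and e: median, mean, standard deviation,
# variability (variation) coefficient and a confidence interval for the mean.
import numpy as np
from scipy import stats

def summary_statistics(values, confidence=0.95):
    x = np.asarray(values, dtype=float)
    n = x.size
    mean = x.mean()
    std = x.std(ddof=1)              # sample standard deviation
    cv = std / mean                  # variability coefficient
    sem = std / np.sqrt(n)           # standard error of the mean
    ci_low, ci_high = stats.t.interval(confidence, df=n - 1, loc=mean, scale=sem)
    return {
        "median": float(np.median(x)),
        "mean": float(mean),
        "standard_deviation": float(std),
        "variability_coefficient": float(cv),
        "mean_confidence_interval": (float(ci_low), float(ci_high)),
    }
```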

Though these are not the only statistics that can be calculated, they are sufficient to refine the results, allowing us to present them in a more understandable way. The use of tables and comparative graphics is highly recommended.